Jan 31 14:41:34 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 31 14:41:34 crc restorecon[4694]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 31 14:41:34 crc restorecon[4694]:
/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 31 14:41:34 crc restorecon[4694]: 
/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c661,c999 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc 
restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:41:34 crc restorecon[4694]: 
/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c18 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:41:34 crc restorecon[4694]: 
/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:41:34 crc restorecon[4694]: 
/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:41:34 crc 
restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 
14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c2,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 14:41:34 crc restorecon[4694]: 
/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:41:34 crc 
restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 31 14:41:34 crc restorecon[4694]: 
/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:41:34 crc restorecon[4694]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 14:41:34 crc restorecon[4694]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c268,c620 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:41:34 crc 
restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 14:41:34 crc restorecon[4694]: 
/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:34
crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to system_u:object_r:container_file_t:s0:c377,c642 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c842,c986 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c764,c897 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 31 14:41:34 crc restorecon[4694]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:41:34 crc restorecon[4694]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:41:34 crc restorecon[4694]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:41:34 crc restorecon[4694]: 
/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:41:34 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 
14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:41:35 crc 
restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc 
restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset 
as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc 
restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c219,c404 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 
crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc 
restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc 
restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc 
restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc 
restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c247,c522 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:41:35 crc 
restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:41:35 crc restorecon[4694]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to 
system_u:object_r:container_file_t:s0 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:41:35 crc restorecon[4694]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 31 14:41:35 crc restorecon[4694]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 31 14:41:36 crc kubenswrapper[4751]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 14:41:36 crc kubenswrapper[4751]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 31 14:41:36 crc kubenswrapper[4751]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 14:41:36 crc kubenswrapper[4751]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 31 14:41:36 crc kubenswrapper[4751]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 31 14:41:36 crc kubenswrapper[4751]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.133917 4751 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.139970 4751 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140002 4751 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140012 4751 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140021 4751 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140029 4751 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140039 4751 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140051 4751 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140061 4751 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140097 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140106 4751 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140114 4751 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140123 4751 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140131 4751 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140139 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140148 4751 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140156 4751 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140164 4751 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140171 4751 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140179 4751 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140186 4751 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140194 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 
14:41:36.140201 4751 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140209 4751 feature_gate.go:330] unrecognized feature gate: Example
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140216 4751 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140224 4751 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140232 4751 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140239 4751 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140247 4751 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140254 4751 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140262 4751 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140270 4751 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140278 4751 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140285 4751 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140293 4751 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140300 4751 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140308 4751 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140316 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140324 4751 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140332 4751 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140339 4751 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140347 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140359 4751 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140371 4751 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140380 4751 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140389 4751 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140398 4751 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140407 4751 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140415 4751 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140423 4751 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140431 4751 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140440 4751 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140448 4751 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140458 4751 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140465 4751 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140473 4751 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140481 4751 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140488 4751 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140496 4751 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140506 4751 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140515 4751 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140524 4751 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140535 4751 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140546 4751 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140555 4751 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140563 4751 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140571 4751 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140579 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140588 4751 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140596 4751 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140604 4751 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.140612 4751 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.140742 4751 flags.go:64] FLAG: --address="0.0.0.0"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.140758 4751 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.140772 4751 flags.go:64] FLAG: --anonymous-auth="true"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.140786 4751 flags.go:64] FLAG: --application-metrics-count-limit="100"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.140801 4751 flags.go:64] FLAG: --authentication-token-webhook="false"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.140812 4751 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.140828 4751 flags.go:64] FLAG: --authorization-mode="AlwaysAllow"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.140842 4751 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.140853 4751 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.140865 4751 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.140876 4751 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.140888 4751 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.140901 4751 flags.go:64] FLAG: --cgroup-driver="cgroupfs"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.140912 4751 flags.go:64] FLAG: --cgroup-root=""
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.140923 4751 flags.go:64] FLAG: --cgroups-per-qos="true"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.140934 4751 flags.go:64] FLAG: --client-ca-file=""
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.140944 4751 flags.go:64] FLAG: --cloud-config=""
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.140953 4751 flags.go:64] FLAG: --cloud-provider=""
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.140962 4751 flags.go:64] FLAG: --cluster-dns="[]"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.140976 4751 flags.go:64] FLAG: --cluster-domain=""
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.140985 4751 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.140994 4751 flags.go:64] FLAG: --config-dir=""
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141003 4751 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141013 4751 flags.go:64] FLAG: --container-log-max-files="5"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141031 4751 flags.go:64] FLAG: --container-log-max-size="10Mi"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141041 4751 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141050 4751 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141060 4751 flags.go:64] FLAG: --containerd-namespace="k8s.io"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141100 4751 flags.go:64] FLAG: --contention-profiling="false"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141110 4751 flags.go:64] FLAG: --cpu-cfs-quota="true"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141119 4751 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141131 4751 flags.go:64] FLAG: --cpu-manager-policy="none"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141140 4751 flags.go:64] FLAG: --cpu-manager-policy-options=""
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141152 4751 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141161 4751 flags.go:64] FLAG: --enable-controller-attach-detach="true"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141170 4751 flags.go:64] FLAG: --enable-debugging-handlers="true"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141179 4751 flags.go:64] FLAG: --enable-load-reader="false"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141188 4751 flags.go:64] FLAG: --enable-server="true"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141198 4751 flags.go:64] FLAG: --enforce-node-allocatable="[pods]"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141209 4751 flags.go:64] FLAG: --event-burst="100"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141219 4751 flags.go:64] FLAG: --event-qps="50"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141228 4751 flags.go:64] FLAG: --event-storage-age-limit="default=0"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141237 4751 flags.go:64] FLAG: --event-storage-event-limit="default=0"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141247 4751 flags.go:64] FLAG: --eviction-hard=""
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141258 4751 flags.go:64] FLAG: --eviction-max-pod-grace-period="0"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141267 4751 flags.go:64] FLAG: --eviction-minimum-reclaim=""
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141276 4751 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141286 4751 flags.go:64] FLAG: --eviction-soft=""
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141295 4751 flags.go:64] FLAG: --eviction-soft-grace-period=""
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141304 4751 flags.go:64] FLAG: --exit-on-lock-contention="false"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141313 4751 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141322 4751 flags.go:64] FLAG: --experimental-mounter-path=""
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141331 4751 flags.go:64] FLAG: --fail-cgroupv1="false"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141339 4751 flags.go:64] FLAG: --fail-swap-on="true"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141349 4751 flags.go:64] FLAG: --feature-gates=""
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141360 4751 flags.go:64] FLAG: --file-check-frequency="20s"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141369 4751 flags.go:64] FLAG: --global-housekeeping-interval="1m0s"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141378 4751 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141387 4751 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141397 4751 flags.go:64] FLAG: --healthz-port="10248"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141406 4751 flags.go:64] FLAG: --help="false"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141416 4751 flags.go:64] FLAG: --hostname-override=""
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141425 4751 flags.go:64] FLAG: --housekeeping-interval="10s"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141435 4751 flags.go:64] FLAG: --http-check-frequency="20s"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141445 4751 flags.go:64] FLAG: --image-credential-provider-bin-dir=""
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141453 4751 flags.go:64] FLAG: --image-credential-provider-config=""
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141462 4751 flags.go:64] FLAG: --image-gc-high-threshold="85"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141471 4751 flags.go:64] FLAG: --image-gc-low-threshold="80"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141481 4751 flags.go:64] FLAG: --image-service-endpoint=""
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141490 4751 flags.go:64] FLAG: --kernel-memcg-notification="false"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141499 4751 flags.go:64] FLAG: --kube-api-burst="100"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141508 4751 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141517 4751 flags.go:64] FLAG: --kube-api-qps="50"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141526 4751 flags.go:64] FLAG: --kube-reserved=""
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141535 4751 flags.go:64] FLAG: --kube-reserved-cgroup=""
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141544 4751 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141553 4751 flags.go:64] FLAG: --kubelet-cgroups=""
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141562 4751 flags.go:64] FLAG: --local-storage-capacity-isolation="true"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141593 4751 flags.go:64] FLAG: --lock-file=""
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141602 4751 flags.go:64] FLAG: --log-cadvisor-usage="false"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141612 4751 flags.go:64] FLAG: --log-flush-frequency="5s"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141622 4751 flags.go:64] FLAG: --log-json-info-buffer-size="0"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141635 4751 flags.go:64] FLAG: --log-json-split-stream="false"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141644 4751 flags.go:64] FLAG: --log-text-info-buffer-size="0"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141653 4751 flags.go:64] FLAG: --log-text-split-stream="false"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141663 4751 flags.go:64] FLAG: --logging-format="text"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141672 4751 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141681 4751 flags.go:64] FLAG: --make-iptables-util-chains="true"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141690 4751 flags.go:64] FLAG: --manifest-url=""
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141699 4751 flags.go:64] FLAG: --manifest-url-header=""
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141712 4751 flags.go:64] FLAG: --max-housekeeping-interval="15s"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141721 4751 flags.go:64] FLAG: --max-open-files="1000000"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141732 4751 flags.go:64] FLAG: --max-pods="110"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141741 4751 flags.go:64] FLAG: --maximum-dead-containers="-1"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141750 4751 flags.go:64] FLAG: --maximum-dead-containers-per-container="1"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141761 4751 flags.go:64] FLAG: --memory-manager-policy="None"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141770 4751 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141779 4751 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141788 4751 flags.go:64] FLAG: --node-ip="192.168.126.11"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141797 4751 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141816 4751 flags.go:64] FLAG: --node-status-max-images="50"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141826 4751 flags.go:64] FLAG: --node-status-update-frequency="10s"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141835 4751 flags.go:64] FLAG: --oom-score-adj="-999"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141844 4751 flags.go:64] FLAG: --pod-cidr=""
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141854 4751 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141868 4751 flags.go:64] FLAG: --pod-manifest-path=""
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141877 4751 flags.go:64] FLAG: --pod-max-pids="-1"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141886 4751 flags.go:64] FLAG: --pods-per-core="0"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141895 4751 flags.go:64] FLAG: --port="10250"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141904 4751 flags.go:64] FLAG: --protect-kernel-defaults="false"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141913 4751 flags.go:64] FLAG: --provider-id=""
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141922 4751 flags.go:64] FLAG: --qos-reserved=""
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141931 4751 flags.go:64] FLAG: --read-only-port="10255"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141940 4751 flags.go:64] FLAG: --register-node="true"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141949 4751 flags.go:64] FLAG: --register-schedulable="true"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141957 4751 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141972 4751 flags.go:64] FLAG: --registry-burst="10"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141981 4751 flags.go:64] FLAG: --registry-qps="5"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141990 4751 flags.go:64] FLAG: --reserved-cpus=""
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.141998 4751 flags.go:64] FLAG: --reserved-memory=""
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.142009 4751 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.142019 4751 flags.go:64] FLAG: --root-dir="/var/lib/kubelet"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.142028 4751 flags.go:64] FLAG: --rotate-certificates="false"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.142037 4751 flags.go:64] FLAG: --rotate-server-certificates="false"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.142046 4751 flags.go:64] FLAG: --runonce="false"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.142054 4751 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.142089 4751 flags.go:64] FLAG: --runtime-request-timeout="2m0s"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.142098 4751 flags.go:64] FLAG: --seccomp-default="false"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.142108 4751 flags.go:64] FLAG: --serialize-image-pulls="true"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.142116 4751 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.142126 4751 flags.go:64] FLAG: --storage-driver-db="cadvisor"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.142135 4751 flags.go:64] FLAG: --storage-driver-host="localhost:8086"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.142145 4751 flags.go:64] FLAG: --storage-driver-password="root"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.142153 4751 flags.go:64] FLAG: --storage-driver-secure="false"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.142162 4751 flags.go:64] FLAG: --storage-driver-table="stats"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.142171 4751 flags.go:64] FLAG: --storage-driver-user="root"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.142179 4751 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.142190 4751 flags.go:64] FLAG: --sync-frequency="1m0s"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.142199 4751 flags.go:64] FLAG: --system-cgroups=""
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.142208 4751 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.142223 4751 flags.go:64] FLAG: --system-reserved-cgroup=""
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.142232 4751 flags.go:64] FLAG: --tls-cert-file=""
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.142242 4751 flags.go:64] FLAG: --tls-cipher-suites="[]"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.142253 4751 flags.go:64] FLAG: --tls-min-version=""
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.142262 4751 flags.go:64] FLAG: --tls-private-key-file=""
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.142271 4751 flags.go:64] FLAG: --topology-manager-policy="none"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.142280 4751 flags.go:64] FLAG: --topology-manager-policy-options=""
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.142289 4751 flags.go:64] FLAG: --topology-manager-scope="container"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.142298 4751 flags.go:64] FLAG: --v="2"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.142309 4751 flags.go:64] FLAG: --version="false"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.142320 4751 flags.go:64] FLAG: --vmodule=""
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.142331 4751 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.142341 4751 flags.go:64] FLAG: --volume-stats-agg-period="1m0s"
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142541 4751 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142551 4751 feature_gate.go:330] unrecognized feature gate: DNSNameResolver
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142561 4751 feature_gate.go:330] unrecognized feature gate: OVNObservability
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142569 4751 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142579 4751 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142589 4751 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142599 4751 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release.
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142609 4751 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142618 4751 feature_gate.go:330] unrecognized feature gate: PinnedImages
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142627 4751 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142637 4751 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142646 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142656 4751 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142664 4751 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142673 4751 feature_gate.go:330] unrecognized feature gate: NewOLM
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142683 4751 feature_gate.go:330] unrecognized feature gate: InsightsConfig
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142691 4751 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142700 4751 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142710 4751 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142717 4751 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142725 4751 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142733 4751 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142740 4751 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142749 4751 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142757 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142765 4751 feature_gate.go:330] unrecognized feature gate: UpgradeStatus
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142772 4751 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142780 4751 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142788 4751 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142795 4751 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142803 4751 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142812 4751 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142819 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142827 4751 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142835 4751 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142843 4751 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142851 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142859 4751 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142866 4751 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142874 4751 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142882 4751 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142890 4751 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142898 4751 feature_gate.go:330] unrecognized feature gate: ManagedBootImages
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142908 4751 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release.
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142918 4751 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142927 4751 feature_gate.go:330] unrecognized feature gate: Example
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142935 4751 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142943 4751 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142951 4751 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142959 4751 feature_gate.go:330] unrecognized feature gate: ExternalOIDC
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142967 4751 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142974 4751 feature_gate.go:330] unrecognized feature gate: PlatformOperators
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142985 4751 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release.
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.142994 4751 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.143003 4751 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.143011 4751 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.143019 4751 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.143028 4751 feature_gate.go:330] unrecognized feature gate: HardwareSpeed
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.143036 4751 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.143045 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.143053 4751 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.143060 4751 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.143093 4751 feature_gate.go:330] unrecognized feature gate: GatewayAPI
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.143101 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.143109 4751 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.143116 4751 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.143124 4751 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.143132 4751 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.143140 4751 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.143147 4751 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.143156 4751 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.143181 4751 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]}
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.156466 4751 server.go:491] "Kubelet version" kubeletVersion="v1.31.5"
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.156517 4751 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.156654 4751 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.156668 4751 feature_gate.go:330] unrecognized feature gate: SignatureStores
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.156678 4751 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.156688 4751 feature_gate.go:330] unrecognized feature gate: OnClusterBuild
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.156697 4751 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags
Jan 31 14:41:36
crc kubenswrapper[4751]: W0131 14:41:36.156707 4751 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.156716 4751 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.156725 4751 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.156735 4751 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.156747 4751 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.156757 4751 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.156767 4751 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.156775 4751 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.156783 4751 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.156793 4751 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.156801 4751 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.156810 4751 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.156819 4751 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.156827 4751 feature_gate.go:330] unrecognized feature gate: 
NetworkSegmentation Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.156835 4751 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.156843 4751 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.156851 4751 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.156859 4751 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.156867 4751 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.156876 4751 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.156885 4751 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.156893 4751 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.156902 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.156909 4751 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.156917 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.156924 4751 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.156932 4751 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.156940 4751 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 14:41:36 crc 
kubenswrapper[4751]: W0131 14:41:36.156947 4751 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.156955 4751 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.156966 4751 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.156976 4751 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.156985 4751 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157016 4751 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157024 4751 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157032 4751 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157040 4751 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157048 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157057 4751 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157064 4751 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157095 4751 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157103 4751 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 
14:41:36.157114 4751 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157124 4751 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157134 4751 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157166 4751 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157174 4751 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157183 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157194 4751 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157202 4751 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157210 4751 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157218 4751 feature_gate.go:330] unrecognized feature gate: Example Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157226 4751 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157234 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157244 4751 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157254 4751 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157262 4751 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157270 4751 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157278 4751 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157285 4751 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157293 4751 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157300 4751 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157308 4751 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157316 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157323 4751 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157331 4751 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.157344 4751 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false 
ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157643 4751 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157657 4751 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157665 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157673 4751 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157681 4751 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157692 4751 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157702 4751 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157711 4751 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157719 4751 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157727 4751 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157735 4751 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157743 4751 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157750 4751 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157758 4751 feature_gate.go:330] 
unrecognized feature gate: NewOLM Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157779 4751 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157787 4751 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157795 4751 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157802 4751 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157810 4751 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157818 4751 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157829 4751 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157839 4751 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157849 4751 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157857 4751 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157865 4751 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157873 4751 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157881 4751 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157890 4751 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157899 4751 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157907 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157914 4751 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157922 4751 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157930 4751 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157937 4751 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157945 4751 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157952 4751 feature_gate.go:330] unrecognized feature gate: 
OpenShiftPodSecurityAdmission Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157960 4751 feature_gate.go:330] unrecognized feature gate: Example Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157968 4751 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157976 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157984 4751 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157992 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.157999 4751 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.158008 4751 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.158015 4751 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.158023 4751 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.158031 4751 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.158038 4751 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.158047 4751 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.158054 4751 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.158062 4751 feature_gate.go:330] unrecognized feature gate: 
GCPClusterHostedDNS Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.158108 4751 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.158117 4751 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.158125 4751 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.158133 4751 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.158141 4751 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.158148 4751 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.158156 4751 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.158164 4751 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.158171 4751 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.158179 4751 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.158187 4751 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.158195 4751 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.158202 4751 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.158210 4751 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.158217 
4751 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.158227 4751 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.158236 4751 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.158244 4751 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.158252 4751 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.158262 4751 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.158272 4751 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.158284 4751 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.159262 4751 server.go:940] "Client rotation is on, will bootstrap in background" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.172401 4751 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.173131 4751 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.175223 4751 server.go:997] "Starting client certificate rotation" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.175290 4751 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.176361 4751 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2026-01-05 03:29:51.269274361 +0000 UTC Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.176464 4751 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.207471 4751 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 31 14:41:36 crc kubenswrapper[4751]: E0131 14:41:36.213165 4751 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.213338 4751 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.228026 4751 log.go:25] "Validated CRI v1 runtime API" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.271963 4751 log.go:25] "Validated CRI v1 image API" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.274627 4751 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.281534 4751 
fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-31-14-37-27-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3] Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.281584 4751 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}] Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.314672 4751 manager.go:217] Machine: {Timestamp:2026-01-31 14:41:36.311834054 +0000 UTC m=+0.686546999 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654128640 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef BootID:2bc08d22-1e39-4800-b402-ea260cc19637 Filesystems:[{Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365412864 Type:vfs Inodes:821634 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108170 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827064320 Type:vfs Inodes:4108170 HasInodes:true} {Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 
Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:e8:75:38 Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:e8:75:38 Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:37:66:ad Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:b8:87:0c Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:2b:85:13 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:6a:e2:76 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:e2:31:b3:08:bc:b9 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:46:ee:c5:32:e1:b8 Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654128640 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 
Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.315055 4751 manager_no_libpfm.go:29] cAdvisor is build 
without cgo and/or libpfm support. Perf event counters are not available. Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.315391 4751 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.315899 4751 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.316213 4751 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.316273 4751 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Q
uantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.316625 4751 topology_manager.go:138] "Creating topology manager with none policy" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.316645 4751 container_manager_linux.go:303] "Creating device plugin manager" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.317144 4751 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.317195 4751 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.317477 4751 state_mem.go:36] "Initialized new in-memory state store" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.318216 4751 server.go:1245] "Using root directory" path="/var/lib/kubelet" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.323386 4751 kubelet.go:418] "Attempting to sync node with API server" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.323431 4751 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.323473 4751 file.go:69] "Watching path" path="/etc/kubernetes/manifests" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.323494 4751 kubelet.go:324] "Adding apiserver pod source" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.323517 4751 apiserver.go:42] "Waiting for node sync before watching 
apiserver pods" Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.326736 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Jan 31 14:41:36 crc kubenswrapper[4751]: E0131 14:41:36.326855 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.326881 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Jan 31 14:41:36 crc kubenswrapper[4751]: E0131 14:41:36.326958 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.328966 4751 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.330495 4751 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem". 
Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.332210 4751 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.333828 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.333873 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.333887 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.333902 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.333923 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.333937 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.333950 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.333999 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.334014 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.334029 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.334108 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.334143 4751 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.335186 4751 plugins.go:603] "Loaded volume plugin" 
pluginName="kubernetes.io/csi" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.336155 4751 server.go:1280] "Started kubelet" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.336287 4751 server.go:163] "Starting to listen" address="0.0.0.0" port=10250 Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.336978 4751 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.337145 4751 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.337903 4751 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 31 14:41:36 crc systemd[1]: Started Kubernetes Kubelet. Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.339164 4751 server.go:460] "Adding debug handlers to kubelet server" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.341242 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.341361 4751 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.342192 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 16:23:34.700402174 +0000 UTC Jan 31 14:41:36 crc kubenswrapper[4751]: E0131 14:41:36.342839 4751 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.343179 4751 volume_manager.go:287] "The desired_state_of_world populator starts" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 
14:41:36.343245 4751 volume_manager.go:289] "Starting Kubelet Volume Manager" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.343285 4751 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.344125 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Jan 31 14:41:36 crc kubenswrapper[4751]: E0131 14:41:36.344292 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:41:36 crc kubenswrapper[4751]: E0131 14:41:36.344765 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="200ms" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.352340 4751 factory.go:55] Registering systemd factory Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.352393 4751 factory.go:221] Registration of the systemd container factory successfully Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.353052 4751 factory.go:153] Registering CRI-O factory Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.353129 4751 factory.go:221] Registration of the crio container factory successfully Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.353264 4751 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api 
service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.353299 4751 factory.go:103] Registering Raw factory Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.353327 4751 manager.go:1196] Started watching for new ooms in manager Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.354324 4751 manager.go:319] Starting recovery of all containers Jan 31 14:41:36 crc kubenswrapper[4751]: E0131 14:41:36.352534 4751 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.98:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188fd7d6d88ee3c4 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 14:41:36.336110532 +0000 UTC m=+0.710823457,LastTimestamp:2026-01-31 14:41:36.336110532 +0000 UTC m=+0.710823457,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.367349 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.367441 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.367470 4751 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.367493 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.367512 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.367532 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.367551 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.367570 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.367594 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" 
pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.367616 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.367635 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.367655 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.367676 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.367698 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.367717 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.367782 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.367803 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.367822 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.367842 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.367861 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.367880 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" 
volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.367900 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.367922 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.367941 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.367960 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.367981 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368003 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" 
volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368023 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368043 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368063 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368114 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368170 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368197 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" 
volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368216 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368234 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368253 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368272 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368291 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368310 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368328 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368348 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368367 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368387 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368407 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368425 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" 
volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368443 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368465 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368499 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368529 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368555 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368583 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" 
volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368608 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368640 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368669 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368696 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368725 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368756 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368783 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368808 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368827 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368845 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368865 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368885 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368903 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368934 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368953 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.368977 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.369002 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.369044 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" 
seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.369104 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.369130 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.369154 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.369181 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.369206 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.369233 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.369259 4751 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.369281 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.369307 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.369333 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.372362 4751 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.372445 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext="" Jan 31 14:41:36 crc 
kubenswrapper[4751]: I0131 14:41:36.372473 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.372497 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.372520 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.372543 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.372566 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.372594 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.372623 4751 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.372654 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.372684 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.372741 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.372774 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.372802 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.372834 4751 reconstruct.go:130] "Volume is marked as uncertain and 
added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.372862 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.372891 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.372920 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.372943 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.372962 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.372983 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.373002 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.373023 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.373043 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.373061 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.373145 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.373194 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" 
volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.373224 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.373255 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.373285 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.373313 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.373337 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.373356 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" 
seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.373383 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.373403 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.373424 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.373445 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.373811 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.373830 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 
14:41:36.373853 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.373872 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.373891 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.373930 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.373969 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.373994 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374017 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the 
actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374042 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374127 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374176 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374204 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374244 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374280 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" 
volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374308 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374332 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374361 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374406 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374435 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374460 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" 
volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374483 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374506 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374526 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374544 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374567 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374593 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" 
volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374617 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374646 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374665 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374683 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374700 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374726 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" 
volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374749 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374765 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374781 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374804 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374828 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374851 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" 
volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374866 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374885 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374902 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374918 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374948 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374967 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" 
volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.374987 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.375005 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.375023 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.375041 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.375059 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.375137 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" 
volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.375325 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.375356 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378030 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378106 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378127 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378146 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" 
volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378165 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378188 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378206 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378238 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378255 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378284 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378304 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378323 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378339 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378357 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378374 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378393 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" 
volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378411 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378428 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378444 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378458 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378475 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378490 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" 
seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378507 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378526 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378552 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378572 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378590 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378608 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 
14:41:36.378627 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378683 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378707 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378728 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378747 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378766 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378784 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378800 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378819 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378836 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378854 4751 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext="" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378872 4751 reconstruct.go:97] "Volume reconstruction finished" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.378886 4751 reconciler.go:26] "Reconciler: start to sync state" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.381848 4751 manager.go:324] Recovery completed Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.393185 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 
14:41:36.395594 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.395656 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.395670 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.396430 4751 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.396453 4751 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.396492 4751 state_mem.go:36] "Initialized new in-memory state store" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.402695 4751 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.404545 4751 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.404585 4751 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.404621 4751 kubelet.go:2335] "Starting kubelet main sync loop" Jan 31 14:41:36 crc kubenswrapper[4751]: E0131 14:41:36.404693 4751 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.407628 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Jan 31 14:41:36 crc kubenswrapper[4751]: E0131 14:41:36.407717 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.415514 4751 policy_none.go:49] "None policy: Start" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.417313 4751 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.417348 4751 state_mem.go:35] "Initializing new in-memory state store" Jan 31 14:41:36 crc kubenswrapper[4751]: E0131 14:41:36.443611 4751 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.468254 4751 manager.go:334] "Starting Device Plugin manager" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.468475 4751 manager.go:513] "Failed to read data from 
checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.468569 4751 server.go:79] "Starting device plugin registration server" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.469231 4751 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.469364 4751 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.469902 4751 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.470352 4751 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.470462 4751 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 31 14:41:36 crc kubenswrapper[4751]: E0131 14:41:36.478392 4751 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.505269 4751 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.505384 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.509424 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.509479 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.509497 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.510031 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.510113 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.510381 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.511615 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.511676 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.511694 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.511788 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.511828 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.511846 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.512044 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.512280 4751 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.512344 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.513109 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.513155 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.513173 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.513326 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.513547 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.513602 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.513624 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.513652 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.513665 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.514673 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.514699 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.514767 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.514724 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.514824 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.514861 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.515047 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:36 crc kubenswrapper[4751]: 
I0131 14:41:36.515139 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.515176 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.516328 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.516355 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.516369 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.516441 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.516506 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.516524 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.516902 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.516988 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.518282 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.518316 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.518329 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:36 crc kubenswrapper[4751]: E0131 14:41:36.545950 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="400ms" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.570592 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.571950 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.572002 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.572016 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.572043 4751 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 14:41:36 crc kubenswrapper[4751]: E0131 14:41:36.572686 4751 
kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.98:6443: connect: connection refused" node="crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.580864 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.580918 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.581186 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.581250 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.581401 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.581481 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.581531 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.581576 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.581620 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.581682 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod 
\"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.581729 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.581775 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.581820 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.581861 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.581901 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " 
pod="openshift-etcd/etcd-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.683125 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.683217 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.683256 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.683317 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.683353 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.683360 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.683387 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.683415 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.683480 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.683433 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.683531 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod 
\"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.683551 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.683553 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.683590 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.683641 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.683642 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.683653 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.683687 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.683749 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.683800 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.683818 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.683843 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:41:36 crc 
kubenswrapper[4751]: I0131 14:41:36.683883 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.683918 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.683920 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.683966 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.683970 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.684001 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.684062 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.683885 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.773692 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.775724 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.775790 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.775813 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.775854 4751 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 14:41:36 crc kubenswrapper[4751]: E0131 14:41:36.776626 4751 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 
38.102.83.98:6443: connect: connection refused" node="crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.854513 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.867345 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.896447 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.919024 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-fe2b08f694193d48aba43b738bc3ad0e34f6dade5d9923f898bce86b51df2cd4 WatchSource:0}: Error finding container fe2b08f694193d48aba43b738bc3ad0e34f6dade5d9923f898bce86b51df2cd4: Status 404 returned error can't find the container with id fe2b08f694193d48aba43b738bc3ad0e34f6dade5d9923f898bce86b51df2cd4 Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.924143 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-88b00d0aa2e6a00ded22705bbdd4e1c09877c9a7026ebc2131992b951ea468f8 WatchSource:0}: Error finding container 88b00d0aa2e6a00ded22705bbdd4e1c09877c9a7026ebc2131992b951ea468f8: Status 404 returned error can't find the container with id 88b00d0aa2e6a00ded22705bbdd4e1c09877c9a7026ebc2131992b951ea468f8 Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.924429 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: I0131 14:41:36.932791 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.933654 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2139d3e2895fc6797b9c76a1b4c9886d.slice/crio-3d8f5f7217a27dd1cc29e7316d4f3457b960feaa8a38ea18ee8ce25778480b97 WatchSource:0}: Error finding container 3d8f5f7217a27dd1cc29e7316d4f3457b960feaa8a38ea18ee8ce25778480b97: Status 404 returned error can't find the container with id 3d8f5f7217a27dd1cc29e7316d4f3457b960feaa8a38ea18ee8ce25778480b97 Jan 31 14:41:36 crc kubenswrapper[4751]: E0131 14:41:36.947671 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="800ms" Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.949978 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-3a08a017395c4b14c93b205b62a2857317a99bc053cf38153b08b0ddfab6d072 WatchSource:0}: Error finding container 3a08a017395c4b14c93b205b62a2857317a99bc053cf38153b08b0ddfab6d072: Status 404 returned error can't find the container with id 3a08a017395c4b14c93b205b62a2857317a99bc053cf38153b08b0ddfab6d072 Jan 31 14:41:36 crc kubenswrapper[4751]: W0131 14:41:36.952760 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-76b06490ea6d342f0413a5f40a449f0749d6666ab75911ec1426cbcfa4258c9e 
WatchSource:0}: Error finding container 76b06490ea6d342f0413a5f40a449f0749d6666ab75911ec1426cbcfa4258c9e: Status 404 returned error can't find the container with id 76b06490ea6d342f0413a5f40a449f0749d6666ab75911ec1426cbcfa4258c9e Jan 31 14:41:37 crc kubenswrapper[4751]: I0131 14:41:37.177164 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:37 crc kubenswrapper[4751]: I0131 14:41:37.179724 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:37 crc kubenswrapper[4751]: I0131 14:41:37.179804 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:37 crc kubenswrapper[4751]: I0131 14:41:37.179823 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:37 crc kubenswrapper[4751]: I0131 14:41:37.179868 4751 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 14:41:37 crc kubenswrapper[4751]: E0131 14:41:37.180932 4751 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.98:6443: connect: connection refused" node="crc" Jan 31 14:41:37 crc kubenswrapper[4751]: I0131 14:41:37.338102 4751 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Jan 31 14:41:37 crc kubenswrapper[4751]: I0131 14:41:37.343191 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 16:30:09.415993748 +0000 UTC Jan 31 14:41:37 crc kubenswrapper[4751]: I0131 14:41:37.410572 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3a08a017395c4b14c93b205b62a2857317a99bc053cf38153b08b0ddfab6d072"} Jan 31 14:41:37 crc kubenswrapper[4751]: I0131 14:41:37.411486 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"3d8f5f7217a27dd1cc29e7316d4f3457b960feaa8a38ea18ee8ce25778480b97"} Jan 31 14:41:37 crc kubenswrapper[4751]: I0131 14:41:37.414436 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"88b00d0aa2e6a00ded22705bbdd4e1c09877c9a7026ebc2131992b951ea468f8"} Jan 31 14:41:37 crc kubenswrapper[4751]: I0131 14:41:37.416182 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"fe2b08f694193d48aba43b738bc3ad0e34f6dade5d9923f898bce86b51df2cd4"} Jan 31 14:41:37 crc kubenswrapper[4751]: I0131 14:41:37.420701 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"76b06490ea6d342f0413a5f40a449f0749d6666ab75911ec1426cbcfa4258c9e"} Jan 31 14:41:37 crc kubenswrapper[4751]: W0131 14:41:37.424457 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Jan 31 14:41:37 crc kubenswrapper[4751]: E0131 14:41:37.424518 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list 
*v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:41:37 crc kubenswrapper[4751]: W0131 14:41:37.609460 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Jan 31 14:41:37 crc kubenswrapper[4751]: E0131 14:41:37.609567 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:41:37 crc kubenswrapper[4751]: E0131 14:41:37.748884 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="1.6s" Jan 31 14:41:37 crc kubenswrapper[4751]: W0131 14:41:37.787557 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Jan 31 14:41:37 crc kubenswrapper[4751]: E0131 14:41:37.787668 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:41:37 
crc kubenswrapper[4751]: W0131 14:41:37.803434 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Jan 31 14:41:37 crc kubenswrapper[4751]: E0131 14:41:37.803516 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:41:37 crc kubenswrapper[4751]: I0131 14:41:37.981696 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:37 crc kubenswrapper[4751]: I0131 14:41:37.983875 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:37 crc kubenswrapper[4751]: I0131 14:41:37.983936 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:37 crc kubenswrapper[4751]: I0131 14:41:37.983954 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:37 crc kubenswrapper[4751]: I0131 14:41:37.983989 4751 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 14:41:37 crc kubenswrapper[4751]: E0131 14:41:37.984620 4751 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.98:6443: connect: connection refused" node="crc" Jan 31 14:41:38 crc kubenswrapper[4751]: I0131 14:41:38.288770 4751 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 31 14:41:38 
crc kubenswrapper[4751]: E0131 14:41:38.290342 4751 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:41:38 crc kubenswrapper[4751]: I0131 14:41:38.338016 4751 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Jan 31 14:41:38 crc kubenswrapper[4751]: I0131 14:41:38.344219 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 17:22:23.359146688 +0000 UTC Jan 31 14:41:38 crc kubenswrapper[4751]: I0131 14:41:38.427448 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c"} Jan 31 14:41:38 crc kubenswrapper[4751]: I0131 14:41:38.427517 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb"} Jan 31 14:41:38 crc kubenswrapper[4751]: I0131 14:41:38.427537 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853"} Jan 31 
14:41:38 crc kubenswrapper[4751]: I0131 14:41:38.429844 4751 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254" exitCode=0 Jan 31 14:41:38 crc kubenswrapper[4751]: I0131 14:41:38.429926 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254"} Jan 31 14:41:38 crc kubenswrapper[4751]: I0131 14:41:38.430106 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:38 crc kubenswrapper[4751]: I0131 14:41:38.431581 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:38 crc kubenswrapper[4751]: I0131 14:41:38.431637 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:38 crc kubenswrapper[4751]: I0131 14:41:38.431656 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:38 crc kubenswrapper[4751]: I0131 14:41:38.432664 4751 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="ccd9efb7096722c8a48318444b235a1970fbec711faf7448d47696ff84da5d37" exitCode=0 Jan 31 14:41:38 crc kubenswrapper[4751]: I0131 14:41:38.432768 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:38 crc kubenswrapper[4751]: I0131 14:41:38.432778 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"ccd9efb7096722c8a48318444b235a1970fbec711faf7448d47696ff84da5d37"} Jan 31 14:41:38 crc kubenswrapper[4751]: I0131 
14:41:38.433990 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:38 crc kubenswrapper[4751]: I0131 14:41:38.434351 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:38 crc kubenswrapper[4751]: I0131 14:41:38.434393 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:38 crc kubenswrapper[4751]: I0131 14:41:38.434409 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:38 crc kubenswrapper[4751]: I0131 14:41:38.435164 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:38 crc kubenswrapper[4751]: I0131 14:41:38.435242 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:38 crc kubenswrapper[4751]: I0131 14:41:38.435269 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:38 crc kubenswrapper[4751]: I0131 14:41:38.435607 4751 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="3cbc9dfffff59a784e871d47bf2ab1b8419b11840eb329ce21c0f7c24e6e77c9" exitCode=0 Jan 31 14:41:38 crc kubenswrapper[4751]: I0131 14:41:38.435673 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"3cbc9dfffff59a784e871d47bf2ab1b8419b11840eb329ce21c0f7c24e6e77c9"} Jan 31 14:41:38 crc kubenswrapper[4751]: I0131 14:41:38.435716 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:38 crc kubenswrapper[4751]: I0131 14:41:38.437286 4751 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:38 crc kubenswrapper[4751]: I0131 14:41:38.437373 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:38 crc kubenswrapper[4751]: I0131 14:41:38.437391 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:38 crc kubenswrapper[4751]: I0131 14:41:38.440173 4751 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="a0845dfce4ee156b5b52e07b6257d62908413eba9570b3767b9f00724e81e034" exitCode=0 Jan 31 14:41:38 crc kubenswrapper[4751]: I0131 14:41:38.440240 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:38 crc kubenswrapper[4751]: I0131 14:41:38.440246 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"a0845dfce4ee156b5b52e07b6257d62908413eba9570b3767b9f00724e81e034"} Jan 31 14:41:38 crc kubenswrapper[4751]: I0131 14:41:38.441404 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:38 crc kubenswrapper[4751]: I0131 14:41:38.441456 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:38 crc kubenswrapper[4751]: I0131 14:41:38.441473 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:39 crc kubenswrapper[4751]: W0131 14:41:39.039680 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Jan 31 
14:41:39 crc kubenswrapper[4751]: E0131 14:41:39.040004 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:41:39 crc kubenswrapper[4751]: I0131 14:41:39.338953 4751 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Jan 31 14:41:39 crc kubenswrapper[4751]: I0131 14:41:39.345369 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 16:35:23.180008567 +0000 UTC Jan 31 14:41:39 crc kubenswrapper[4751]: E0131 14:41:39.350193 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="3.2s" Jan 31 14:41:39 crc kubenswrapper[4751]: I0131 14:41:39.448415 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17"} Jan 31 14:41:39 crc kubenswrapper[4751]: I0131 14:41:39.448570 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:39 crc kubenswrapper[4751]: I0131 14:41:39.449895 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:39 crc kubenswrapper[4751]: I0131 
14:41:39.449947 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:39 crc kubenswrapper[4751]: I0131 14:41:39.449963 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:39 crc kubenswrapper[4751]: I0131 14:41:39.452515 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3"} Jan 31 14:41:39 crc kubenswrapper[4751]: I0131 14:41:39.452562 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea"} Jan 31 14:41:39 crc kubenswrapper[4751]: I0131 14:41:39.452582 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218"} Jan 31 14:41:39 crc kubenswrapper[4751]: I0131 14:41:39.452598 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff"} Jan 31 14:41:39 crc kubenswrapper[4751]: I0131 14:41:39.454957 4751 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="d1020dca4733e38925646f97eb80524c4060630e33323e9a5a0fdc4634c6b468" exitCode=0 Jan 31 14:41:39 crc kubenswrapper[4751]: I0131 14:41:39.455155 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:39 crc 
kubenswrapper[4751]: I0131 14:41:39.455280 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"d1020dca4733e38925646f97eb80524c4060630e33323e9a5a0fdc4634c6b468"} Jan 31 14:41:39 crc kubenswrapper[4751]: I0131 14:41:39.456138 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:39 crc kubenswrapper[4751]: I0131 14:41:39.456170 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:39 crc kubenswrapper[4751]: I0131 14:41:39.456186 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:39 crc kubenswrapper[4751]: I0131 14:41:39.460987 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:39 crc kubenswrapper[4751]: I0131 14:41:39.460986 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"a3a6478c4477b785bcb405d597f1c835faaf4ef7adb3a2bcd6e70cc2e692f44d"} Jan 31 14:41:39 crc kubenswrapper[4751]: I0131 14:41:39.461160 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"f41a7c5739a571e6f3ec88c3798ad2604382b9320c44ddda3d41681a64c6ab2e"} Jan 31 14:41:39 crc kubenswrapper[4751]: I0131 14:41:39.461196 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"c56c1e31014f9e3d0be8140f58cff1c752ad4be1c6c60a942bc18320bbd37b6a"} Jan 31 14:41:39 crc kubenswrapper[4751]: I0131 14:41:39.462764 4751 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:39 crc kubenswrapper[4751]: I0131 14:41:39.462804 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:39 crc kubenswrapper[4751]: I0131 14:41:39.462816 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:39 crc kubenswrapper[4751]: I0131 14:41:39.470774 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"3e9f62b49c0d916da6e1631f3216d52fd37ab407e878dc0509ccb19d0e5fb1df"} Jan 31 14:41:39 crc kubenswrapper[4751]: I0131 14:41:39.470836 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:39 crc kubenswrapper[4751]: I0131 14:41:39.471948 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:39 crc kubenswrapper[4751]: I0131 14:41:39.471976 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:39 crc kubenswrapper[4751]: I0131 14:41:39.472009 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:39 crc kubenswrapper[4751]: I0131 14:41:39.585556 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:39 crc kubenswrapper[4751]: I0131 14:41:39.592910 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:39 crc kubenswrapper[4751]: I0131 14:41:39.592951 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:39 crc 
kubenswrapper[4751]: I0131 14:41:39.592964 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:39 crc kubenswrapper[4751]: I0131 14:41:39.592993 4751 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 14:41:39 crc kubenswrapper[4751]: E0131 14:41:39.593547 4751 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.98:6443: connect: connection refused" node="crc" Jan 31 14:41:39 crc kubenswrapper[4751]: I0131 14:41:39.776724 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:41:39 crc kubenswrapper[4751]: W0131 14:41:39.783871 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.98:6443: connect: connection refused Jan 31 14:41:39 crc kubenswrapper[4751]: E0131 14:41:39.783972 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.98:6443: connect: connection refused" logger="UnhandledError" Jan 31 14:41:39 crc kubenswrapper[4751]: I0131 14:41:39.785892 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:41:40 crc kubenswrapper[4751]: I0131 14:41:40.345739 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 11:06:37.66158861 +0000 UTC Jan 31 14:41:40 crc 
kubenswrapper[4751]: I0131 14:41:40.479759 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2"} Jan 31 14:41:40 crc kubenswrapper[4751]: I0131 14:41:40.479952 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:40 crc kubenswrapper[4751]: I0131 14:41:40.481320 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:40 crc kubenswrapper[4751]: I0131 14:41:40.481367 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:40 crc kubenswrapper[4751]: I0131 14:41:40.481389 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:40 crc kubenswrapper[4751]: I0131 14:41:40.483394 4751 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="bf0f78147bc50d98a5ba239c2456467778fb4724433d914b9ee4300ce3af6e4a" exitCode=0 Jan 31 14:41:40 crc kubenswrapper[4751]: I0131 14:41:40.483548 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:40 crc kubenswrapper[4751]: I0131 14:41:40.483582 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:40 crc kubenswrapper[4751]: I0131 14:41:40.483631 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:40 crc kubenswrapper[4751]: I0131 14:41:40.483627 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"bf0f78147bc50d98a5ba239c2456467778fb4724433d914b9ee4300ce3af6e4a"} Jan 31 14:41:40 crc kubenswrapper[4751]: I0131 14:41:40.483641 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:40 crc kubenswrapper[4751]: I0131 14:41:40.483720 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:41:40 crc kubenswrapper[4751]: I0131 14:41:40.483750 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:41:40 crc kubenswrapper[4751]: I0131 14:41:40.488277 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:40 crc kubenswrapper[4751]: I0131 14:41:40.488334 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:40 crc kubenswrapper[4751]: I0131 14:41:40.488323 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:40 crc kubenswrapper[4751]: I0131 14:41:40.488433 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:40 crc kubenswrapper[4751]: I0131 14:41:40.488464 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:40 crc kubenswrapper[4751]: I0131 14:41:40.488488 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:40 crc kubenswrapper[4751]: I0131 14:41:40.488641 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:40 crc kubenswrapper[4751]: I0131 14:41:40.488716 4751 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:40 crc kubenswrapper[4751]: I0131 14:41:40.488745 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:40 crc kubenswrapper[4751]: I0131 14:41:40.490221 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:40 crc kubenswrapper[4751]: I0131 14:41:40.490264 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:40 crc kubenswrapper[4751]: I0131 14:41:40.490359 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:41 crc kubenswrapper[4751]: I0131 14:41:41.345908 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 14:18:47.640302556 +0000 UTC Jan 31 14:41:41 crc kubenswrapper[4751]: I0131 14:41:41.490829 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 14:41:41 crc kubenswrapper[4751]: I0131 14:41:41.490857 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:41 crc kubenswrapper[4751]: I0131 14:41:41.490883 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:41 crc kubenswrapper[4751]: I0131 14:41:41.490816 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"e66ea760a35f4e073d5ead7b0270164010b4dd14737b23202f83a10290f75d3c"} Jan 31 14:41:41 crc kubenswrapper[4751]: I0131 14:41:41.491003 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" 
event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"7f4a4eb52c2c850f91c212fdc556452ab8cc91168ddb67c2078b806d8725be2a"} Jan 31 14:41:41 crc kubenswrapper[4751]: I0131 14:41:41.491064 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"c0b0fe57d51f2684ba60b1818c1e3010e5364c6d196433972b46cb3c3f9b5e61"} Jan 31 14:41:41 crc kubenswrapper[4751]: I0131 14:41:41.490948 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:41 crc kubenswrapper[4751]: I0131 14:41:41.492386 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:41 crc kubenswrapper[4751]: I0131 14:41:41.492438 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:41 crc kubenswrapper[4751]: I0131 14:41:41.492454 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:41 crc kubenswrapper[4751]: I0131 14:41:41.493658 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:41 crc kubenswrapper[4751]: I0131 14:41:41.493690 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:41 crc kubenswrapper[4751]: I0131 14:41:41.493705 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:41 crc kubenswrapper[4751]: I0131 14:41:41.493713 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:41 crc kubenswrapper[4751]: I0131 14:41:41.493747 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 
14:41:41 crc kubenswrapper[4751]: I0131 14:41:41.493763 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:42 crc kubenswrapper[4751]: I0131 14:41:42.157197 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:41:42 crc kubenswrapper[4751]: I0131 14:41:42.300249 4751 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 31 14:41:42 crc kubenswrapper[4751]: I0131 14:41:42.347106 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 15:03:22.896582443 +0000 UTC Jan 31 14:41:42 crc kubenswrapper[4751]: I0131 14:41:42.387509 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:41:42 crc kubenswrapper[4751]: I0131 14:41:42.500113 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"92d196e489f72bd3c04ada6d0ea993f0ad89eb42497efc8723720ca3a7720509"} Jan 31 14:41:42 crc kubenswrapper[4751]: I0131 14:41:42.500194 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 14:41:42 crc kubenswrapper[4751]: I0131 14:41:42.500261 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:42 crc kubenswrapper[4751]: I0131 14:41:42.500314 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:42 crc kubenswrapper[4751]: I0131 14:41:42.500340 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:42 crc kubenswrapper[4751]: I0131 14:41:42.500204 4751 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"aa739a6a66bd2196c9131cf929bdb8a133e3e40c3dfa9a105bb3ea33fa2ede20"} Jan 31 14:41:42 crc kubenswrapper[4751]: I0131 14:41:42.501768 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:42 crc kubenswrapper[4751]: I0131 14:41:42.501824 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:42 crc kubenswrapper[4751]: I0131 14:41:42.501851 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:42 crc kubenswrapper[4751]: I0131 14:41:42.501919 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:42 crc kubenswrapper[4751]: I0131 14:41:42.501953 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:42 crc kubenswrapper[4751]: I0131 14:41:42.501970 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:42 crc kubenswrapper[4751]: I0131 14:41:42.501971 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:42 crc kubenswrapper[4751]: I0131 14:41:42.502009 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:42 crc kubenswrapper[4751]: I0131 14:41:42.502032 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:42 crc kubenswrapper[4751]: I0131 14:41:42.793910 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:42 crc kubenswrapper[4751]: I0131 14:41:42.795755 4751 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:42 crc kubenswrapper[4751]: I0131 14:41:42.795815 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:42 crc kubenswrapper[4751]: I0131 14:41:42.795850 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:42 crc kubenswrapper[4751]: I0131 14:41:42.795907 4751 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 14:41:43 crc kubenswrapper[4751]: I0131 14:41:43.347512 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 16:14:36.729956994 +0000 UTC Jan 31 14:41:43 crc kubenswrapper[4751]: I0131 14:41:43.503543 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:43 crc kubenswrapper[4751]: I0131 14:41:43.504779 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:43 crc kubenswrapper[4751]: I0131 14:41:43.504839 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:43 crc kubenswrapper[4751]: I0131 14:41:43.504861 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:43 crc kubenswrapper[4751]: I0131 14:41:43.603788 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:41:43 crc kubenswrapper[4751]: I0131 14:41:43.604166 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:43 crc kubenswrapper[4751]: I0131 14:41:43.605627 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 14:41:43 crc kubenswrapper[4751]: I0131 14:41:43.605677 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:43 crc kubenswrapper[4751]: I0131 14:41:43.605695 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:43 crc kubenswrapper[4751]: I0131 14:41:43.875636 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:41:44 crc kubenswrapper[4751]: I0131 14:41:44.348465 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 10:55:25.590535162 +0000 UTC Jan 31 14:41:44 crc kubenswrapper[4751]: I0131 14:41:44.506916 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:44 crc kubenswrapper[4751]: I0131 14:41:44.508205 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:44 crc kubenswrapper[4751]: I0131 14:41:44.508266 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:44 crc kubenswrapper[4751]: I0131 14:41:44.508290 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:45 crc kubenswrapper[4751]: I0131 14:41:45.349338 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 05:34:51.743664375 +0000 UTC Jan 31 14:41:45 crc kubenswrapper[4751]: I0131 14:41:45.693661 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 31 14:41:45 crc kubenswrapper[4751]: I0131 14:41:45.693959 4751 kubelet_node_status.go:401] "Setting node 
annotation to enable volume controller attach/detach" Jan 31 14:41:45 crc kubenswrapper[4751]: I0131 14:41:45.695606 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:45 crc kubenswrapper[4751]: I0131 14:41:45.695646 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:45 crc kubenswrapper[4751]: I0131 14:41:45.695666 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:46 crc kubenswrapper[4751]: I0131 14:41:46.350272 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 02:57:28.635188184 +0000 UTC Jan 31 14:41:46 crc kubenswrapper[4751]: E0131 14:41:46.478527 4751 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 31 14:41:47 crc kubenswrapper[4751]: I0131 14:41:47.231228 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:41:47 crc kubenswrapper[4751]: I0131 14:41:47.231505 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:47 crc kubenswrapper[4751]: I0131 14:41:47.233006 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:47 crc kubenswrapper[4751]: I0131 14:41:47.233056 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:47 crc kubenswrapper[4751]: I0131 14:41:47.233113 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:47 crc kubenswrapper[4751]: I0131 14:41:47.239807 4751 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:41:47 crc kubenswrapper[4751]: I0131 14:41:47.350444 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 03:56:57.665967953 +0000 UTC Jan 31 14:41:47 crc kubenswrapper[4751]: I0131 14:41:47.514335 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:47 crc kubenswrapper[4751]: I0131 14:41:47.515581 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:47 crc kubenswrapper[4751]: I0131 14:41:47.515618 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:47 crc kubenswrapper[4751]: I0131 14:41:47.515629 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:48 crc kubenswrapper[4751]: I0131 14:41:48.350913 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 11:02:58.834129059 +0000 UTC Jan 31 14:41:49 crc kubenswrapper[4751]: I0131 14:41:49.351492 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 09:59:40.555076452 +0000 UTC Jan 31 14:41:50 crc kubenswrapper[4751]: W0131 14:41:50.007132 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 31 14:41:50 crc kubenswrapper[4751]: I0131 14:41:50.007242 4751 trace.go:236] Trace[1671566675]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 
(31-Jan-2026 14:41:40.005) (total time: 10001ms): Jan 31 14:41:50 crc kubenswrapper[4751]: Trace[1671566675]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:41:50.007) Jan 31 14:41:50 crc kubenswrapper[4751]: Trace[1671566675]: [10.001706612s] [10.001706612s] END Jan 31 14:41:50 crc kubenswrapper[4751]: E0131 14:41:50.007273 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 31 14:41:50 crc kubenswrapper[4751]: I0131 14:41:50.231439 4751 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 14:41:50 crc kubenswrapper[4751]: I0131 14:41:50.231533 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 14:41:50 crc kubenswrapper[4751]: I0131 14:41:50.339556 4751 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 31 14:41:50 crc kubenswrapper[4751]: I0131 14:41:50.352164 4751 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 17:01:15.962187861 +0000 UTC Jan 31 14:41:50 crc kubenswrapper[4751]: W0131 14:41:50.724101 4751 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 31 14:41:50 crc kubenswrapper[4751]: I0131 14:41:50.724225 4751 trace.go:236] Trace[1279759450]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 14:41:40.722) (total time: 10002ms): Jan 31 14:41:50 crc kubenswrapper[4751]: Trace[1279759450]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (14:41:50.724) Jan 31 14:41:50 crc kubenswrapper[4751]: Trace[1279759450]: [10.002026007s] [10.002026007s] END Jan 31 14:41:50 crc kubenswrapper[4751]: E0131 14:41:50.724255 4751 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 31 14:41:50 crc kubenswrapper[4751]: I0131 14:41:50.816012 4751 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 31 14:41:50 crc kubenswrapper[4751]: I0131 14:41:50.816123 4751 prober.go:107] "Probe failed" 
probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 31 14:41:50 crc kubenswrapper[4751]: I0131 14:41:50.823717 4751 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 31 14:41:50 crc kubenswrapper[4751]: I0131 14:41:50.823784 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 31 14:41:51 crc kubenswrapper[4751]: I0131 14:41:51.186402 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 31 14:41:51 crc kubenswrapper[4751]: I0131 14:41:51.186738 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:51 crc kubenswrapper[4751]: I0131 14:41:51.188265 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:51 crc kubenswrapper[4751]: I0131 14:41:51.188308 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:51 crc kubenswrapper[4751]: I0131 14:41:51.188322 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:51 crc kubenswrapper[4751]: I0131 14:41:51.237447 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-etcd/etcd-crc" Jan 31 14:41:51 crc kubenswrapper[4751]: I0131 14:41:51.352748 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 03:54:14.548595716 +0000 UTC Jan 31 14:41:51 crc kubenswrapper[4751]: I0131 14:41:51.526837 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:51 crc kubenswrapper[4751]: I0131 14:41:51.528194 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:51 crc kubenswrapper[4751]: I0131 14:41:51.528264 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:51 crc kubenswrapper[4751]: I0131 14:41:51.528282 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:51 crc kubenswrapper[4751]: I0131 14:41:51.550140 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 31 14:41:52 crc kubenswrapper[4751]: I0131 14:41:52.353487 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 17:43:07.426118064 +0000 UTC Jan 31 14:41:52 crc kubenswrapper[4751]: I0131 14:41:52.395732 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:41:52 crc kubenswrapper[4751]: I0131 14:41:52.395975 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:52 crc kubenswrapper[4751]: I0131 14:41:52.397482 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:52 crc kubenswrapper[4751]: I0131 14:41:52.397547 4751 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:52 crc kubenswrapper[4751]: I0131 14:41:52.397565 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:52 crc kubenswrapper[4751]: I0131 14:41:52.406368 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:41:52 crc kubenswrapper[4751]: I0131 14:41:52.530025 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:52 crc kubenswrapper[4751]: I0131 14:41:52.530154 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:52 crc kubenswrapper[4751]: I0131 14:41:52.531537 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:52 crc kubenswrapper[4751]: I0131 14:41:52.531586 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:52 crc kubenswrapper[4751]: I0131 14:41:52.531604 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:52 crc kubenswrapper[4751]: I0131 14:41:52.531539 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:52 crc kubenswrapper[4751]: I0131 14:41:52.531726 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:52 crc kubenswrapper[4751]: I0131 14:41:52.531756 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:53 crc kubenswrapper[4751]: I0131 14:41:53.354401 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 
2025-12-17 11:00:24.715521795 +0000 UTC Jan 31 14:41:53 crc kubenswrapper[4751]: I0131 14:41:53.926569 4751 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 31 14:41:54 crc kubenswrapper[4751]: I0131 14:41:54.355821 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 23:55:10.154730813 +0000 UTC Jan 31 14:41:55 crc kubenswrapper[4751]: I0131 14:41:55.356751 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 21:06:31.751453667 +0000 UTC Jan 31 14:41:55 crc kubenswrapper[4751]: E0131 14:41:55.805378 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 31 14:41:55 crc kubenswrapper[4751]: I0131 14:41:55.808987 4751 trace.go:236] Trace[113917315]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 14:41:43.134) (total time: 12674ms): Jan 31 14:41:55 crc kubenswrapper[4751]: Trace[113917315]: ---"Objects listed" error: 12674ms (14:41:55.808) Jan 31 14:41:55 crc kubenswrapper[4751]: Trace[113917315]: [12.674308794s] [12.674308794s] END Jan 31 14:41:55 crc kubenswrapper[4751]: I0131 14:41:55.809063 4751 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 31 14:41:55 crc kubenswrapper[4751]: I0131 14:41:55.810345 4751 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 31 14:41:55 crc kubenswrapper[4751]: I0131 14:41:55.810625 4751 trace.go:236] Trace[1609368597]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (31-Jan-2026 14:41:43.600) (total time: 12209ms): Jan 31 14:41:55 crc kubenswrapper[4751]: 
Trace[1609368597]: ---"Objects listed" error: 12209ms (14:41:55.810) Jan 31 14:41:55 crc kubenswrapper[4751]: Trace[1609368597]: [12.209705652s] [12.209705652s] END Jan 31 14:41:55 crc kubenswrapper[4751]: I0131 14:41:55.810668 4751 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 31 14:41:55 crc kubenswrapper[4751]: E0131 14:41:55.811886 4751 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 31 14:41:55 crc kubenswrapper[4751]: I0131 14:41:55.822622 4751 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 31 14:41:55 crc kubenswrapper[4751]: I0131 14:41:55.872179 4751 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:59828->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 31 14:41:55 crc kubenswrapper[4751]: I0131 14:41:55.872327 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:59828->192.168.126.11:17697: read: connection reset by peer" Jan 31 14:41:55 crc kubenswrapper[4751]: I0131 14:41:55.872250 4751 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:59838->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 31 14:41:55 crc 
kubenswrapper[4751]: I0131 14:41:55.872499 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:59838->192.168.126.11:17697: read: connection reset by peer" Jan 31 14:41:55 crc kubenswrapper[4751]: I0131 14:41:55.873140 4751 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 31 14:41:55 crc kubenswrapper[4751]: I0131 14:41:55.873237 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 31 14:41:55 crc kubenswrapper[4751]: I0131 14:41:55.873776 4751 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 31 14:41:55 crc kubenswrapper[4751]: I0131 14:41:55.873835 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 31 14:41:56 crc kubenswrapper[4751]: I0131 14:41:56.357211 4751 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 10:28:40.704318502 +0000 UTC Jan 31 14:41:56 crc kubenswrapper[4751]: E0131 14:41:56.478649 4751 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 31 14:41:56 crc kubenswrapper[4751]: I0131 14:41:56.544570 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 31 14:41:56 crc kubenswrapper[4751]: I0131 14:41:56.546991 4751 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2" exitCode=255 Jan 31 14:41:56 crc kubenswrapper[4751]: I0131 14:41:56.547051 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2"} Jan 31 14:41:56 crc kubenswrapper[4751]: I0131 14:41:56.547294 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:41:56 crc kubenswrapper[4751]: I0131 14:41:56.548725 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:41:56 crc kubenswrapper[4751]: I0131 14:41:56.548794 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:41:56 crc kubenswrapper[4751]: I0131 14:41:56.548821 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:41:56 crc kubenswrapper[4751]: I0131 14:41:56.549902 4751 scope.go:117] "RemoveContainer" 
containerID="16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2" Jan 31 14:41:56 crc kubenswrapper[4751]: I0131 14:41:56.809309 4751 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.234966 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.239331 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.337056 4751 apiserver.go:52] "Watching apiserver" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.340450 4751 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.340975 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c"] Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.341555 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.341923 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:41:57 crc kubenswrapper[4751]: E0131 14:41:57.342066 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.342172 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:41:57 crc kubenswrapper[4751]: E0131 14:41:57.342224 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.342281 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.342276 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.342353 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:41:57 crc kubenswrapper[4751]: E0131 14:41:57.342389 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.343909 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.344139 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.344489 4751 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.344952 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.345164 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.345187 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.346370 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.347025 4751 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.347566 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.348491 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.357611 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 05:22:54.656469349 +0000 UTC Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.366477 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d
85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"
startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.382448 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.404987 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.415676 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.422299 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.422343 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.422367 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.422389 4751 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.422408 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.422428 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.422448 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.422468 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.422490 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: 
\"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.422539 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.422560 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.422580 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.422606 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.422627 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") 
" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.422648 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.422669 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.422690 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.422748 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.422765 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.422786 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.422808 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.422829 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.422850 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.422869 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.422889 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.422912 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.422921 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.422943 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.422939 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.423028 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.423066 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.423132 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.423166 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.423200 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.423238 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.423272 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.423307 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.423341 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.423376 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.423419 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.423510 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.423563 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.423615 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.423667 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.423718 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.423758 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.423791 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.423829 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.423863 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.423896 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 14:41:57 crc 
kubenswrapper[4751]: I0131 14:41:57.423929 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.423960 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.423994 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.424030 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.427358 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.427403 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod 
\"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.427444 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.423100 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.427509 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.423195 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.423237 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.423396 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.423591 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.427562 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.423645 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.423658 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.423704 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.423855 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.424020 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.424358 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.424409 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.424440 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.424535 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.424710 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.424729 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.424752 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.424789 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.424952 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.425032 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.425112 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.425133 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.425468 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.425484 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.425521 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.425549 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.425590 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.425704 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.425859 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.425975 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.425994 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.426326 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.425235 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.426484 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.426655 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.426959 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.427134 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.427206 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.427548 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.427910 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.427930 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.427947 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 
14:41:57.427965 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.427982 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.427999 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.428013 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.428028 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.428044 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod 
\"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.428061 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.428089 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.428103 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.428117 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.428134 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.428209 4751 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.428234 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.428972 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.429167 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.429573 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.429622 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.429828 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.429848 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.429864 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod 
\"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.429882 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.429897 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.429912 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.429926 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.429943 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.429958 4751 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.429974 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.429989 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.430004 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.430023 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.430040 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod 
\"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.430056 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.430088 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.431421 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.431440 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.431562 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.431586 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.431625 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.431644 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.431661 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.431676 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.431692 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: 
\"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.431707 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.431723 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.431742 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.431758 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.431775 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.431792 4751 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.431807 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.431823 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.431839 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.431855 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.431872 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod 
\"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.431902 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.431919 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.431935 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.432649 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.432668 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.432683 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.432698 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.432713 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.432727 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.432742 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.432757 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 14:41:57 crc 
kubenswrapper[4751]: I0131 14:41:57.432774 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.432790 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.432805 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.432831 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.432854 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.432870 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.432885 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.432900 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.432915 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.432929 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.433008 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:41:57 
crc kubenswrapper[4751]: I0131 14:41:57.432609 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.433277 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.433295 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.434197 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod 
\"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.434246 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.434339 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.434372 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.434409 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.434996 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.435117 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.435153 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.435186 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.435218 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.435251 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.435286 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436346 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436377 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436398 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436417 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436434 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436451 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436469 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436487 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436505 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436524 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436542 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 31 14:41:57 crc 
kubenswrapper[4751]: I0131 14:41:57.436558 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436578 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436595 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436645 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436662 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436679 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: 
\"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436696 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436718 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436734 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436752 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436768 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: 
\"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436784 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436799 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436815 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436831 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436846 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436861 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" 
(UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436875 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436892 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436909 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436924 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.439029 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: 
\"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.439148 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.439229 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.439265 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.439379 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.440187 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.440233 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.440268 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.440302 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.440339 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.440383 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.440417 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 31 14:41:57 crc 
kubenswrapper[4751]: I0131 14:41:57.440450 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.440484 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.440520 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.440556 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.440602 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.440637 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.440670 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.440704 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.440739 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.440807 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.440848 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod 
\"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.440885 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.440925 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.440965 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441004 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441042 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441103 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441140 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441178 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441211 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 14:41:57 crc 
kubenswrapper[4751]: I0131 14:41:57.441245 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441283 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441323 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441413 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441438 4751 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441461 4751 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441481 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441501 4751 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441522 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441541 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441561 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441581 4751 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441600 4751 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: 
\"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441621 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441641 4751 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441660 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441679 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441699 4751 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441719 4751 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441756 4751 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441775 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441796 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441815 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441834 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441854 4751 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441874 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441892 4751 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node 
\"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441913 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441933 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441952 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441971 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.441991 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.442011 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.442032 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc 
kubenswrapper[4751]: I0131 14:41:57.442052 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.442096 4751 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.442118 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.442139 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.442157 4751 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.442177 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.442196 4751 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.442215 4751 
reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.442235 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.442364 4751 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.442388 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.442408 4751 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.442615 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.442638 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.443190 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.443232 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.443246 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.430295 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.430348 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.430382 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.430820 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.430864 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.431036 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.431199 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.431354 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.435277 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.435586 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.435626 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.435734 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436013 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436122 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436231 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436255 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436682 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.436796 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.437177 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.437328 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.437383 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.437470 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.438634 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.438878 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.439035 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.439048 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.439060 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.439271 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.439937 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.444847 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.446114 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.446174 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.446227 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.446534 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.446689 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.447026 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.447126 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.447766 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.447963 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.448165 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.448201 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.448358 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.448385 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.448635 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.448887 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.448949 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.449085 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.449248 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.449493 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.449906 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.450425 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.456236 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.456641 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.456719 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.457016 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.457444 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.458294 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: E0131 14:41:57.459095 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:41:57.959030628 +0000 UTC m=+22.333743543 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.459791 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.460031 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.460209 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.460626 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.461341 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.462034 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.462150 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.462207 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.462298 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.462584 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.462494 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.462966 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.462805 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.463100 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.463226 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.463655 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.463704 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.463752 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.463860 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.463939 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.464244 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.464153 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.464850 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: E0131 14:41:57.464983 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:41:57 crc kubenswrapper[4751]: E0131 14:41:57.464999 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:41:57 crc kubenswrapper[4751]: E0131 14:41:57.465010 4751 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:41:57 crc kubenswrapper[4751]: E0131 14:41:57.465050 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 14:41:57.965040326 +0000 UTC m=+22.339753211 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.465665 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.465825 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.465826 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.466325 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.466821 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.466653 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.467121 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: E0131 14:41:57.474892 4751 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.475596 4751 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.477180 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.478498 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.479134 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: E0131 14:41:57.479257 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:41:57.979218136 +0000 UTC m=+22.353931131 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:41:57 crc kubenswrapper[4751]: E0131 14:41:57.481151 4751 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.481386 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:41:57 crc kubenswrapper[4751]: E0131 14:41:57.481479 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:41:57.981202698 +0000 UTC m=+22.355915583 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.481464 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.481777 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.481967 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.482023 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.482088 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.482537 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.482648 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.482962 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.483177 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.483368 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.483641 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.483843 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.484350 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.484439 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.484660 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:41:57 crc kubenswrapper[4751]: E0131 14:41:57.484802 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:41:57 crc kubenswrapper[4751]: E0131 14:41:57.484828 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:41:57 crc kubenswrapper[4751]: E0131 14:41:57.484844 4751 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:41:57 crc kubenswrapper[4751]: E0131 14:41:57.484898 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. 
No retries permitted until 2026-01-31 14:41:57.984877944 +0000 UTC m=+22.359590839 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.508247 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.508866 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.508900 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.509376 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.509579 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.510944 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.510999 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.511748 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.511829 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.511951 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.511938 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.511855 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.512066 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.512302 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.512804 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.512969 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.513185 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.513254 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.513462 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.513495 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.513533 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.513870 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.514143 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.515123 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.515394 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.522702 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.522999 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.523304 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.523780 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.523899 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.525734 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.525937 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.526183 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.526215 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.526358 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.526442 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.526468 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.526483 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.526693 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.526756 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.526832 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.527052 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.527115 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.527396 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.527663 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.527886 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.528346 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.530764 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"
cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-
operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.536412 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.543046 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.544708 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.544774 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.544842 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.544855 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.544868 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.544877 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.544885 4751 
reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.544893 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.544901 4751 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.544909 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.544918 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.544926 4751 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.544934 4751 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.544943 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" 
(UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.544951 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.544959 4751 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.544968 4751 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.544976 4751 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.544985 4751 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.544994 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545002 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: 
\"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545012 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545020 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545028 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545036 4751 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545192 4751 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545204 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545212 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc 
kubenswrapper[4751]: I0131 14:41:57.545221 4751 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545229 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545237 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545245 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545254 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545261 4751 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545269 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545277 4751 reconciler_common.go:293] "Volume 
detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545286 4751 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545294 4751 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545302 4751 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545310 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545318 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545326 4751 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545333 4751 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545341 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545350 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545358 4751 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545367 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545379 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545391 4751 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545403 4751 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on 
node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545415 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545497 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545510 4751 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545520 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545533 4751 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545544 4751 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545558 4751 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545569 4751 reconciler_common.go:293] 
"Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545582 4751 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545592 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545604 4751 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545614 4751 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545627 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545638 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545651 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545662 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545674 4751 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545685 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545699 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545710 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545724 4751 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545735 4751 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545749 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545764 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545780 4751 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545796 4751 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545807 4751 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545819 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545830 4751 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 31 
14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545842 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545856 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545866 4751 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545877 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545883 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545888 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545936 4751 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 
14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545945 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545955 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545964 4751 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545974 4751 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545982 4751 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545992 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546003 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546013 4751 reconciler_common.go:293] "Volume 
detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546023 4751 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546033 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546041 4751 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546050 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546057 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546111 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546119 4751 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 31 
14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546129 4751 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546137 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546146 4751 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546154 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546162 4751 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546170 4751 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546178 4751 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546187 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546196 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546203 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546214 4751 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546223 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546231 4751 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546240 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546248 4751 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") 
on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546259 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546269 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546277 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546285 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546293 4751 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546302 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546310 4751 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546318 4751 
reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546326 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546334 4751 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546342 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546350 4751 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546359 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546367 4751 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546375 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" 
(UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546384 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546392 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546400 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546409 4751 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546417 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546426 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546434 4751 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: 
\"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546443 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546452 4751 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546460 4751 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546469 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546477 4751 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546485 4751 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546495 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node 
\"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546471 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546503 4751 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.546641 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.545833 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.554000 4751 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.555188 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.555289 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19"} Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.555361 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.555657 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:41:57 crc kubenswrapper[4751]: E0131 14:41:57.559812 4751 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-controller-manager-crc\" already exists" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.561491 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.566517 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.569752 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.580414 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.592243 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.606339 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.617149 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.628313 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.638601 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: 
connect: connection refused" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.648018 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.648049 4751 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.663172 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.680871 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 31 14:41:57 crc kubenswrapper[4751]: I0131 14:41:57.696178 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 31 14:41:57 crc kubenswrapper[4751]: W0131 14:41:57.711880 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podef543e1b_8068_4ea3_b32a_61027b32e95d.slice/crio-4d3590e243eaca0ee5e56d7c6d15fb2c545a920dec35c8fae50837d96db3b72b WatchSource:0}: Error finding container 4d3590e243eaca0ee5e56d7c6d15fb2c545a920dec35c8fae50837d96db3b72b: Status 404 returned error can't find the container with id 4d3590e243eaca0ee5e56d7c6d15fb2c545a920dec35c8fae50837d96db3b72b Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.051436 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.051534 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.051584 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:41:58 crc kubenswrapper[4751]: E0131 14:41:58.051604 4751 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:41:59.051583869 +0000 UTC m=+23.426296764 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.051633 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.051667 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:41:58 crc kubenswrapper[4751]: E0131 14:41:58.051709 4751 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:41:58 crc kubenswrapper[4751]: E0131 14:41:58.051727 4751 configmap.go:193] Couldn't get configMap 
openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:41:58 crc kubenswrapper[4751]: E0131 14:41:58.051760 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:41:59.051750113 +0000 UTC m=+23.426463018 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:41:58 crc kubenswrapper[4751]: E0131 14:41:58.051775 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:41:59.051766954 +0000 UTC m=+23.426479849 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:41:58 crc kubenswrapper[4751]: E0131 14:41:58.051837 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:41:58 crc kubenswrapper[4751]: E0131 14:41:58.051853 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:41:58 crc kubenswrapper[4751]: E0131 14:41:58.051865 4751 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:41:58 crc kubenswrapper[4751]: E0131 14:41:58.051895 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 14:41:59.051886237 +0000 UTC m=+23.426599132 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:41:58 crc kubenswrapper[4751]: E0131 14:41:58.051946 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:41:58 crc kubenswrapper[4751]: E0131 14:41:58.051957 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:41:58 crc kubenswrapper[4751]: E0131 14:41:58.051966 4751 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:41:58 crc kubenswrapper[4751]: E0131 14:41:58.051992 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 14:41:59.051983459 +0000 UTC m=+23.426696354 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.358639 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 13:47:06.726019216 +0000 UTC Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.412780 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.414154 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.417181 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.418771 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.419959 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.421047 4751 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.422251 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.423422 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.424661 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.425775 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.426777 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.429328 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.430374 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.431489 4751 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.432557 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.433634 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.434762 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.435578 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.436753 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.439456 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.440725 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.442160 4751 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.443148 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.445366 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.446277 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.448121 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.449552 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.450568 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.451843 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.452806 4751 kubelet_volumes.go:163] "Cleaned 
up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.453819 4751 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.454025 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.456998 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.458019 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.460301 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.462643 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.463995 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 
14:41:58.465160 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.466453 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.467859 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.468810 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.470050 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.472837 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.475508 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.476701 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 
14:41:58.479216 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.480619 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.483765 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.485427 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.487490 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.488463 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.489509 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.490869 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 
14:41:58.491928 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.560824 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"202411a00c080441e6f714d59fc005cf3be5bb4c7484ec618e42efd4b8389e50"} Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.563847 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27"} Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.563894 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"5bac338294303499772c33b17e0d59dadfd61bbde41282085f90771886819933"} Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.566267 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293"} Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.566379 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3"} Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.566411 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"4d3590e243eaca0ee5e56d7c6d15fb2c545a920dec35c8fae50837d96db3b72b"} Jan 31 14:41:58 crc kubenswrapper[4751]: E0131 14:41:58.576010 4751 kubelet.go:1929] "Failed creating a mirror pod for" err="pods \"kube-apiserver-crc\" already exists" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.591565 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\
\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\"
,\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:41:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.611190 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:41:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.634361 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:41:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.655856 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:41:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.680866 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:41:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.707638 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:41:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.734219 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:41:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.758522 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T14:41:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.777136 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:41:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.828459 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:41:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.847573 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:41:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.864625 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T14:41:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.883414 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:41:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.896712 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:41:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.909328 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:41:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:41:58 crc kubenswrapper[4751]: I0131 14:41:58.926894 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:41:58Z is after 2025-08-24T17:21:41Z" Jan 31 14:41:59 crc kubenswrapper[4751]: I0131 14:41:59.060539 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:41:59 crc kubenswrapper[4751]: I0131 14:41:59.060645 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:41:59 crc kubenswrapper[4751]: I0131 14:41:59.060696 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:41:59 crc kubenswrapper[4751]: I0131 14:41:59.060736 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:41:59 crc kubenswrapper[4751]: I0131 14:41:59.060772 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:41:59 crc kubenswrapper[4751]: E0131 14:41:59.060870 4751 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:41:59 crc kubenswrapper[4751]: E0131 14:41:59.060938 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:42:01.060916911 +0000 UTC m=+25.435629836 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:41:59 crc kubenswrapper[4751]: E0131 14:41:59.061485 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:42:01.061466185 +0000 UTC m=+25.436179110 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:41:59 crc kubenswrapper[4751]: E0131 14:41:59.061604 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:41:59 crc kubenswrapper[4751]: E0131 14:41:59.061651 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:41:59 crc kubenswrapper[4751]: E0131 14:41:59.061672 4751 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not 
registered] Jan 31 14:41:59 crc kubenswrapper[4751]: E0131 14:41:59.061719 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 14:42:01.061705132 +0000 UTC m=+25.436418057 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:41:59 crc kubenswrapper[4751]: E0131 14:41:59.061792 4751 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:41:59 crc kubenswrapper[4751]: E0131 14:41:59.061857 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:42:01.061844435 +0000 UTC m=+25.436557360 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:41:59 crc kubenswrapper[4751]: E0131 14:41:59.061941 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:41:59 crc kubenswrapper[4751]: E0131 14:41:59.061965 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:41:59 crc kubenswrapper[4751]: E0131 14:41:59.061981 4751 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:41:59 crc kubenswrapper[4751]: E0131 14:41:59.062019 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 14:42:01.062007219 +0000 UTC m=+25.436720144 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:41:59 crc kubenswrapper[4751]: I0131 14:41:59.358832 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 19:04:15.936801103 +0000 UTC Jan 31 14:41:59 crc kubenswrapper[4751]: I0131 14:41:59.405746 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:41:59 crc kubenswrapper[4751]: I0131 14:41:59.405804 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:41:59 crc kubenswrapper[4751]: I0131 14:41:59.405894 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:41:59 crc kubenswrapper[4751]: E0131 14:41:59.406026 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:41:59 crc kubenswrapper[4751]: E0131 14:41:59.406258 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:41:59 crc kubenswrapper[4751]: E0131 14:41:59.406380 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:42:00 crc kubenswrapper[4751]: I0131 14:42:00.359981 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 12:49:07.422974436 +0000 UTC Jan 31 14:42:00 crc kubenswrapper[4751]: I0131 14:42:00.726620 4751 csr.go:261] certificate signing request csr-h8p2w is approved, waiting to be issued Jan 31 14:42:00 crc kubenswrapper[4751]: I0131 14:42:00.778544 4751 csr.go:257] certificate signing request csr-h8p2w is issued Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.077199 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 
31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.077292 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:01 crc kubenswrapper[4751]: E0131 14:42:01.077369 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:42:05.077331528 +0000 UTC m=+29.452044403 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:42:01 crc kubenswrapper[4751]: E0131 14:42:01.077380 4751 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:42:01 crc kubenswrapper[4751]: E0131 14:42:01.077433 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:42:05.077426481 +0000 UTC m=+29.452139366 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.077473 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.077536 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.077585 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:42:01 crc kubenswrapper[4751]: E0131 14:42:01.077654 4751 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:42:01 crc kubenswrapper[4751]: E0131 14:42:01.077703 4751 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:42:05.077689797 +0000 UTC m=+29.452402682 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:42:01 crc kubenswrapper[4751]: E0131 14:42:01.077757 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:42:01 crc kubenswrapper[4751]: E0131 14:42:01.077777 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:42:01 crc kubenswrapper[4751]: E0131 14:42:01.077783 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:42:01 crc kubenswrapper[4751]: E0131 14:42:01.077837 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:42:01 crc kubenswrapper[4751]: E0131 14:42:01.077852 4751 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:42:01 crc kubenswrapper[4751]: E0131 14:42:01.077933 4751 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 14:42:05.077910213 +0000 UTC m=+29.452623098 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:42:01 crc kubenswrapper[4751]: E0131 14:42:01.077792 4751 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:42:01 crc kubenswrapper[4751]: E0131 14:42:01.077985 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 14:42:05.077978375 +0000 UTC m=+29.452691260 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.361155 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 09:46:24.694883137 +0000 UTC Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.405280 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.405298 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:01 crc kubenswrapper[4751]: E0131 14:42:01.405516 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:42:01 crc kubenswrapper[4751]: E0131 14:42:01.405594 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.406025 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:42:01 crc kubenswrapper[4751]: E0131 14:42:01.406323 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.576112 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf"} Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.639462 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"
restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 
4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.653555 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-2wpj7"] Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.654063 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.656038 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.656620 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.656721 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.663055 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-68hvr"] Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.663370 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-68hvr" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.663843 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.668424 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.668484 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.668515 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.668479 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.681214 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.698417 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.713291 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.727221 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.738958 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.750641 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.764041 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18
fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.776624 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.779867 4751 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-31 14:37:00 +0000 UTC, rotation deadline is 2026-10-14 12:04:05.425901842 +0000 UTC Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.779964 4751 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 6141h22m3.645941634s for next certificate rotation Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.784242 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4c170e8-22c9-43a9-8b34-9d626c2ccddc-proxy-tls\") pod \"machine-config-daemon-2wpj7\" (UID: \"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\") " pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 
14:42:01.784273 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4c170e8-22c9-43a9-8b34-9d626c2ccddc-mcd-auth-proxy-config\") pod \"machine-config-daemon-2wpj7\" (UID: \"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\") " pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.784300 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b4c170e8-22c9-43a9-8b34-9d626c2ccddc-rootfs\") pod \"machine-config-daemon-2wpj7\" (UID: \"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\") " pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.784324 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fv47c\" (UniqueName: \"kubernetes.io/projected/b4c170e8-22c9-43a9-8b34-9d626c2ccddc-kube-api-access-fv47c\") pod \"machine-config-daemon-2wpj7\" (UID: \"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\") " pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.784351 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/658471aa-68b2-478e-9522-ef5533009174-hosts-file\") pod \"node-resolver-68hvr\" (UID: \"658471aa-68b2-478e-9522-ef5533009174\") " pod="openshift-dns/node-resolver-68hvr" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.784365 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbl8x\" (UniqueName: \"kubernetes.io/projected/658471aa-68b2-478e-9522-ef5533009174-kube-api-access-nbl8x\") pod \"node-resolver-68hvr\" (UID: 
\"658471aa-68b2-478e-9522-ef5533009174\") " pod="openshift-dns/node-resolver-68hvr" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.789100 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.801201 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.814283 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.827403 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.837378 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.852155 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\
\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578
bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.877152 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.885126 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fv47c\" (UniqueName: \"kubernetes.io/projected/b4c170e8-22c9-43a9-8b34-9d626c2ccddc-kube-api-access-fv47c\") pod \"machine-config-daemon-2wpj7\" (UID: \"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\") " pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.885210 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/658471aa-68b2-478e-9522-ef5533009174-hosts-file\") pod \"node-resolver-68hvr\" (UID: \"658471aa-68b2-478e-9522-ef5533009174\") " pod="openshift-dns/node-resolver-68hvr" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.885248 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nbl8x\" (UniqueName: \"kubernetes.io/projected/658471aa-68b2-478e-9522-ef5533009174-kube-api-access-nbl8x\") pod \"node-resolver-68hvr\" (UID: \"658471aa-68b2-478e-9522-ef5533009174\") " pod="openshift-dns/node-resolver-68hvr" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.885302 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4c170e8-22c9-43a9-8b34-9d626c2ccddc-proxy-tls\") pod \"machine-config-daemon-2wpj7\" (UID: \"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\") " pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.885338 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4c170e8-22c9-43a9-8b34-9d626c2ccddc-mcd-auth-proxy-config\") pod \"machine-config-daemon-2wpj7\" (UID: 
\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\") " pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.885414 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b4c170e8-22c9-43a9-8b34-9d626c2ccddc-rootfs\") pod \"machine-config-daemon-2wpj7\" (UID: \"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\") " pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.885425 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/658471aa-68b2-478e-9522-ef5533009174-hosts-file\") pod \"node-resolver-68hvr\" (UID: \"658471aa-68b2-478e-9522-ef5533009174\") " pod="openshift-dns/node-resolver-68hvr" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.885503 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/b4c170e8-22c9-43a9-8b34-9d626c2ccddc-rootfs\") pod \"machine-config-daemon-2wpj7\" (UID: \"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\") " pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.886190 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4c170e8-22c9-43a9-8b34-9d626c2ccddc-mcd-auth-proxy-config\") pod \"machine-config-daemon-2wpj7\" (UID: \"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\") " pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.891179 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4c170e8-22c9-43a9-8b34-9d626c2ccddc-proxy-tls\") pod \"machine-config-daemon-2wpj7\" (UID: \"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\") " 
pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.899535 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e1
7815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.907280 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fv47c\" (UniqueName: \"kubernetes.io/projected/b4c170e8-22c9-43a9-8b34-9d626c2ccddc-kube-api-access-fv47c\") pod \"machine-config-daemon-2wpj7\" (UID: \"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\") " pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.915543 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbl8x\" (UniqueName: 
\"kubernetes.io/projected/658471aa-68b2-478e-9522-ef5533009174-kube-api-access-nbl8x\") pod \"node-resolver-68hvr\" (UID: \"658471aa-68b2-478e-9522-ef5533009174\") " pod="openshift-dns/node-resolver-68hvr" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.921175 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.969155 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" Jan 31 14:42:01 crc kubenswrapper[4751]: I0131 14:42:01.977816 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-68hvr" Jan 31 14:42:01 crc kubenswrapper[4751]: W0131 14:42:01.982665 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb4c170e8_22c9_43a9_8b34_9d626c2ccddc.slice/crio-05d1d264c2b92daddfdd471c8e47a1f262c2bb5d610c43c79d89680bbd0aeaad WatchSource:0}: Error finding container 05d1d264c2b92daddfdd471c8e47a1f262c2bb5d610c43c79d89680bbd0aeaad: Status 404 returned error can't find the container with id 05d1d264c2b92daddfdd471c8e47a1f262c2bb5d610c43c79d89680bbd0aeaad Jan 31 14:42:01 crc kubenswrapper[4751]: W0131 14:42:01.997743 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod658471aa_68b2_478e_9522_ef5533009174.slice/crio-e2832894341213686ed02201435b0fa24e74d13736b436632757af4a77da3862 WatchSource:0}: Error finding container e2832894341213686ed02201435b0fa24e74d13736b436632757af4a77da3862: Status 404 returned error can't find the container with id e2832894341213686ed02201435b0fa24e74d13736b436632757af4a77da3862 Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.053564 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-rp5sb"] Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.054408 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-rtthp"] Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.054647 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.055175 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.056598 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.057054 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.057391 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.058824 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.059133 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.059344 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.059931 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.072012 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.084719 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.092974 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.103641 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.117800 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.132710 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.148283 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.164568 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.179545 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.187149 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c5353863-ec39-4357-9b86-9be42ca17916-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rp5sb\" (UID: \"c5353863-ec39-4357-9b86-9be42ca17916\") " pod="openshift-multus/multus-additional-cni-plugins-rp5sb" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.187195 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-hostroot\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.187240 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: 
\"kubernetes.io/configmap/e7dd989b-33df-4562-a60b-f273428fea3d-multus-daemon-config\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.187261 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-host-run-k8s-cni-cncf-io\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.187313 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-etc-kubernetes\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.187331 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-multus-cni-dir\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.187347 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-cnibin\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.187364 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-host-var-lib-kubelet\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.187493 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-host-run-netns\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.187562 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-multus-conf-dir\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.187610 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c5353863-ec39-4357-9b86-9be42ca17916-system-cni-dir\") pod \"multus-additional-cni-plugins-rp5sb\" (UID: \"c5353863-ec39-4357-9b86-9be42ca17916\") " pod="openshift-multus/multus-additional-cni-plugins-rp5sb" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.187659 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-host-run-multus-certs\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.187683 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: 
\"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-system-cni-dir\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.187704 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-os-release\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.187730 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-multus-socket-dir-parent\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.187832 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c5353863-ec39-4357-9b86-9be42ca17916-cni-binary-copy\") pod \"multus-additional-cni-plugins-rp5sb\" (UID: \"c5353863-ec39-4357-9b86-9be42ca17916\") " pod="openshift-multus/multus-additional-cni-plugins-rp5sb" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.187896 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-host-var-lib-cni-multus\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.187919 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgxmq\" (UniqueName: 
\"kubernetes.io/projected/c5353863-ec39-4357-9b86-9be42ca17916-kube-api-access-tgxmq\") pod \"multus-additional-cni-plugins-rp5sb\" (UID: \"c5353863-ec39-4357-9b86-9be42ca17916\") " pod="openshift-multus/multus-additional-cni-plugins-rp5sb" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.187989 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-host-var-lib-cni-bin\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.188030 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c5353863-ec39-4357-9b86-9be42ca17916-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rp5sb\" (UID: \"c5353863-ec39-4357-9b86-9be42ca17916\") " pod="openshift-multus/multus-additional-cni-plugins-rp5sb" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.188115 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hwrtf\" (UniqueName: \"kubernetes.io/projected/e7dd989b-33df-4562-a60b-f273428fea3d-kube-api-access-hwrtf\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.188153 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c5353863-ec39-4357-9b86-9be42ca17916-cnibin\") pod \"multus-additional-cni-plugins-rp5sb\" (UID: \"c5353863-ec39-4357-9b86-9be42ca17916\") " pod="openshift-multus/multus-additional-cni-plugins-rp5sb" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.188185 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c5353863-ec39-4357-9b86-9be42ca17916-os-release\") pod \"multus-additional-cni-plugins-rp5sb\" (UID: \"c5353863-ec39-4357-9b86-9be42ca17916\") " pod="openshift-multus/multus-additional-cni-plugins-rp5sb" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.188236 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e7dd989b-33df-4562-a60b-f273428fea3d-cni-binary-copy\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.193954 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastSta
te\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.212928 4751 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.214781 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.214820 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.214832 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.214977 4751 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.217900 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.221273 4751 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.221532 4751 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.222569 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.222634 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.222647 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.222664 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.222677 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:02Z","lastTransitionTime":"2026-01-31T14:42:02Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.231355 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.240757 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: E0131 14:42:02.243306 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2bc08d22-1e39-4800-b402-ea260cc19637\\\",\\\"systemUUID\\\":\\\"ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.247168 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.247217 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.247229 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.247257 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.247271 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:02Z","lastTransitionTime":"2026-01-31T14:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.253586 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: E0131 14:42:02.258438 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2bc08d22-1e39-4800-b402-ea260cc19637\\\",\\\"systemUUID\\\":\\\"ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.261262 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.261307 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.261321 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.261337 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.261350 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:02Z","lastTransitionTime":"2026-01-31T14:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.266886 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: E0131 14:42:02.271943 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"kubelet has no disk 
pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2
ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9810067
4616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.
io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a07
2c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa73
83b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2bc08d22-1e39-4800-b402-ea260cc19637\\\",\\\"systemUUID\\\":\\\"ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.275523 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.275550 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.275557 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.275570 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.275581 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:02Z","lastTransitionTime":"2026-01-31T14:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.278371 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.288865 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c5353863-ec39-4357-9b86-9be42ca17916-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rp5sb\" (UID: \"c5353863-ec39-4357-9b86-9be42ca17916\") " pod="openshift-multus/multus-additional-cni-plugins-rp5sb" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.288911 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hwrtf\" (UniqueName: \"kubernetes.io/projected/e7dd989b-33df-4562-a60b-f273428fea3d-kube-api-access-hwrtf\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.288956 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c5353863-ec39-4357-9b86-9be42ca17916-cnibin\") pod \"multus-additional-cni-plugins-rp5sb\" (UID: \"c5353863-ec39-4357-9b86-9be42ca17916\") " pod="openshift-multus/multus-additional-cni-plugins-rp5sb" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.289087 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/c5353863-ec39-4357-9b86-9be42ca17916-cnibin\") pod \"multus-additional-cni-plugins-rp5sb\" (UID: \"c5353863-ec39-4357-9b86-9be42ca17916\") " pod="openshift-multus/multus-additional-cni-plugins-rp5sb" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.288999 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c5353863-ec39-4357-9b86-9be42ca17916-os-release\") pod \"multus-additional-cni-plugins-rp5sb\" (UID: \"c5353863-ec39-4357-9b86-9be42ca17916\") " pod="openshift-multus/multus-additional-cni-plugins-rp5sb" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.289234 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/c5353863-ec39-4357-9b86-9be42ca17916-os-release\") pod \"multus-additional-cni-plugins-rp5sb\" (UID: \"c5353863-ec39-4357-9b86-9be42ca17916\") " pod="openshift-multus/multus-additional-cni-plugins-rp5sb" Jan 31 14:42:02 crc kubenswrapper[4751]: E0131 14:42:02.289017 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2bc08d22-1e39-4800-b402-ea260cc19637\\\",\\\"systemUUID\\\":\\\"ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.289300 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e7dd989b-33df-4562-a60b-f273428fea3d-cni-binary-copy\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.289684 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c5353863-ec39-4357-9b86-9be42ca17916-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-rp5sb\" (UID: \"c5353863-ec39-4357-9b86-9be42ca17916\") " pod="openshift-multus/multus-additional-cni-plugins-rp5sb" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.289896 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c5353863-ec39-4357-9b86-9be42ca17916-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rp5sb\" (UID: \"c5353863-ec39-4357-9b86-9be42ca17916\") " pod="openshift-multus/multus-additional-cni-plugins-rp5sb" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.289977 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/e7dd989b-33df-4562-a60b-f273428fea3d-cni-binary-copy\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.290012 4751 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c5353863-ec39-4357-9b86-9be42ca17916-tuning-conf-dir\") pod \"multus-additional-cni-plugins-rp5sb\" (UID: \"c5353863-ec39-4357-9b86-9be42ca17916\") " pod="openshift-multus/multus-additional-cni-plugins-rp5sb" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.290120 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-hostroot\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.290060 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-hostroot\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.290199 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e7dd989b-33df-4562-a60b-f273428fea3d-multus-daemon-config\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.290427 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-host-run-k8s-cni-cncf-io\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.290763 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/e7dd989b-33df-4562-a60b-f273428fea3d-multus-daemon-config\") pod 
\"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.290850 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-host-run-k8s-cni-cncf-io\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.290894 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-etc-kubernetes\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.290921 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-multus-cni-dir\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.290942 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-cnibin\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.290963 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-host-var-lib-kubelet\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 
14:42:02.290984 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-host-run-netns\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.291006 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-multus-conf-dir\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.291012 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-etc-kubernetes\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.291026 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c5353863-ec39-4357-9b86-9be42ca17916-system-cni-dir\") pod \"multus-additional-cni-plugins-rp5sb\" (UID: \"c5353863-ec39-4357-9b86-9be42ca17916\") " pod="openshift-multus/multus-additional-cni-plugins-rp5sb" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.291048 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-host-run-netns\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.291051 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: 
\"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-host-run-multus-certs\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.291057 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-cnibin\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.291093 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-system-cni-dir\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.291145 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-os-release\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.291094 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-multus-conf-dir\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.291157 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-host-run-multus-certs\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 
crc kubenswrapper[4751]: I0131 14:42:02.291144 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/c5353863-ec39-4357-9b86-9be42ca17916-system-cni-dir\") pod \"multus-additional-cni-plugins-rp5sb\" (UID: \"c5353863-ec39-4357-9b86-9be42ca17916\") " pod="openshift-multus/multus-additional-cni-plugins-rp5sb" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.291177 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-host-var-lib-kubelet\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.291215 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-os-release\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.291226 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-system-cni-dir\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.291189 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-multus-socket-dir-parent\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.291262 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c5353863-ec39-4357-9b86-9be42ca17916-cni-binary-copy\") pod \"multus-additional-cni-plugins-rp5sb\" (UID: \"c5353863-ec39-4357-9b86-9be42ca17916\") " pod="openshift-multus/multus-additional-cni-plugins-rp5sb" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.291266 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-multus-cni-dir\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.291278 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-multus-socket-dir-parent\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.291299 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-host-var-lib-cni-multus\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.291323 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tgxmq\" (UniqueName: \"kubernetes.io/projected/c5353863-ec39-4357-9b86-9be42ca17916-kube-api-access-tgxmq\") pod \"multus-additional-cni-plugins-rp5sb\" (UID: \"c5353863-ec39-4357-9b86-9be42ca17916\") " pod="openshift-multus/multus-additional-cni-plugins-rp5sb" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.291350 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-host-var-lib-cni-bin\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.291357 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-host-var-lib-cni-multus\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.291398 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/e7dd989b-33df-4562-a60b-f273428fea3d-host-var-lib-cni-bin\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.291864 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/c5353863-ec39-4357-9b86-9be42ca17916-cni-binary-copy\") pod \"multus-additional-cni-plugins-rp5sb\" (UID: \"c5353863-ec39-4357-9b86-9be42ca17916\") " pod="openshift-multus/multus-additional-cni-plugins-rp5sb" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.292515 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.292898 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.292921 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.292929 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.292943 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.292952 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:02Z","lastTransitionTime":"2026-01-31T14:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:02 crc kubenswrapper[4751]: E0131 14:42:02.306673 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"
registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb617
3ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"reg
istry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@s
ha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2bc08d22-1e39-4800-b402-ea260cc19637\\\",\\\"systemUUID\\\":\\\"ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: E0131 14:42:02.307103 4751 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.307455 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hwrtf\" (UniqueName: \"kubernetes.io/projected/e7dd989b-33df-4562-a60b-f273428fea3d-kube-api-access-hwrtf\") pod \"multus-rtthp\" (UID: \"e7dd989b-33df-4562-a60b-f273428fea3d\") " pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.309118 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.309188 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.309203 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.309221 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:02 
crc kubenswrapper[4751]: I0131 14:42:02.309239 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:02Z","lastTransitionTime":"2026-01-31T14:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.311554 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tgxmq\" (UniqueName: \"kubernetes.io/projected/c5353863-ec39-4357-9b86-9be42ca17916-kube-api-access-tgxmq\") pod \"multus-additional-cni-plugins-rp5sb\" (UID: \"c5353863-ec39-4357-9b86-9be42ca17916\") " pod="openshift-multus/multus-additional-cni-plugins-rp5sb" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.314774 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.327686 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.339272 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.352900 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.361755 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 22:46:45.908673933 +0000 UTC Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.368102 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-rtthp" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.368704 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.374097 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" Jan 31 14:42:02 crc kubenswrapper[4751]: W0131 14:42:02.377805 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode7dd989b_33df_4562_a60b_f273428fea3d.slice/crio-ae820f722f84ee3f3836bd980ddcedf25d9d1ac2247797066e9c1251fe6f89a0 WatchSource:0}: Error finding container ae820f722f84ee3f3836bd980ddcedf25d9d1ac2247797066e9c1251fe6f89a0: Status 404 returned error can't find the container with id ae820f722f84ee3f3836bd980ddcedf25d9d1ac2247797066e9c1251fe6f89a0 Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.381717 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.396682 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.411757 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.411799 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.411811 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 
14:42:02.411826 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.411836 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:02Z","lastTransitionTime":"2026-01-31T14:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.419620 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n8cdt"] Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.420560 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.427365 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.427555 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.427673 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.427716 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.427739 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.427841 4751 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.428059 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.441314 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.453956 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The 
container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.464877 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.475950 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.490216 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.504154 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.516026 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.516062 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.516181 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.516203 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.516216 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:02Z","lastTransitionTime":"2026-01-31T14:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.516529 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.535705 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/
kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.553900 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.569033 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.580739 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" event={"ID":"c5353863-ec39-4357-9b86-9be42ca17916","Type":"ContainerStarted","Data":"27d2e140808d508ff439e4cbc7870480463c92690227b2c341c4d3b77b5e3e73"} Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.581696 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.583108 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rtthp" event={"ID":"e7dd989b-33df-4562-a60b-f273428fea3d","Type":"ContainerStarted","Data":"7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608"} Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.583163 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rtthp" event={"ID":"e7dd989b-33df-4562-a60b-f273428fea3d","Type":"ContainerStarted","Data":"ae820f722f84ee3f3836bd980ddcedf25d9d1ac2247797066e9c1251fe6f89a0"} Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.585271 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-68hvr" event={"ID":"658471aa-68b2-478e-9522-ef5533009174","Type":"ContainerStarted","Data":"a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53"} Jan 
31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.585330 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-68hvr" event={"ID":"658471aa-68b2-478e-9522-ef5533009174","Type":"ContainerStarted","Data":"e2832894341213686ed02201435b0fa24e74d13736b436632757af4a77da3862"} Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.589832 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" event={"ID":"b4c170e8-22c9-43a9-8b34-9d626c2ccddc","Type":"ContainerStarted","Data":"d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679"} Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.589877 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" event={"ID":"b4c170e8-22c9-43a9-8b34-9d626c2ccddc","Type":"ContainerStarted","Data":"3956d143be77f4a50143f9678eb51ab7871e250cae73d87c9e7fce2575e466c2"} Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.589889 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" event={"ID":"b4c170e8-22c9-43a9-8b34-9d626c2ccddc","Type":"ContainerStarted","Data":"05d1d264c2b92daddfdd471c8e47a1f262c2bb5d610c43c79d89680bbd0aeaad"} Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.593823 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-var-lib-openvswitch\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.593850 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-run-openvswitch\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.593866 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-ovnkube-script-lib\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.593881 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-slash\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.593897 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-ovn-node-metrics-cert\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.593913 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-run-netns\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.593925 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-run-ovn\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.593941 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-etc-openvswitch\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.593956 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-cni-netd\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.593970 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-kubelet\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.594029 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-run-ovn-kubernetes\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.594112 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.594155 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-node-log\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.594182 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-log-socket\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.594212 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-systemd-units\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.594259 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-ovnkube-config\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.594289 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-env-overrides\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.594319 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-cni-bin\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.594342 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zhmb7\" (UniqueName: \"kubernetes.io/projected/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-kube-api-access-zhmb7\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.594362 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-run-systemd\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.597347 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.619384 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.619427 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.619438 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.619458 4751 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.619471 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:02Z","lastTransitionTime":"2026-01-31T14:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.619698 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name
\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.634166 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1a
fba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\
\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.647026 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{
\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.661509 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.673677 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.689914 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.695531 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-run-netns\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.695570 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-run-ovn\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.695596 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-etc-openvswitch\") pod \"ovnkube-node-n8cdt\" (UID: 
\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.695619 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-cni-netd\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.695641 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-etc-openvswitch\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.695643 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-kubelet\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.695669 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-kubelet\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.695661 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-run-netns\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 
crc kubenswrapper[4751]: I0131 14:42:02.695699 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-cni-netd\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.695702 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-run-ovn-kubernetes\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.695619 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-run-ovn\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.695748 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.695882 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-node-log\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.695905 4751 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-log-socket\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.695914 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.695974 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-systemd-units\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.695989 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-log-socket\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.695999 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-ovnkube-config\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.695999 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-run-ovn-kubernetes\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.696038 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-node-log\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.696023 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-env-overrides\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.696015 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-systemd-units\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.696238 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-run-systemd\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.696276 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: 
\"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-cni-bin\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.696298 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zhmb7\" (UniqueName: \"kubernetes.io/projected/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-kube-api-access-zhmb7\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.696341 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-run-openvswitch\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.696362 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-ovnkube-script-lib\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.696397 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-var-lib-openvswitch\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.696411 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: 
\"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-run-systemd\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.696437 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-slash\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.696451 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-var-lib-openvswitch\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.696464 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-ovn-node-metrics-cert\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.696469 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-run-openvswitch\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.696478 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-slash\") pod 
\"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.696671 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-env-overrides\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.696729 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-ovnkube-config\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.696716 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-cni-bin\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.697148 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-ovnkube-script-lib\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.699767 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-ovn-node-metrics-cert\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.709265 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.712172 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zhmb7\" (UniqueName: \"kubernetes.io/projected/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-kube-api-access-zhmb7\") pod \"ovnkube-node-n8cdt\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.722313 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.722358 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.722370 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.722389 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.722401 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:02Z","lastTransitionTime":"2026-01-31T14:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.724243 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.733528 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.749713 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.753979 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:02 crc kubenswrapper[4751]: W0131 14:42:02.770153 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podceef6ba7_8d2d_4105_beee_6a8bdfd12c9b.slice/crio-4bbc5e8f3ce6775d094673644f5cb7355eba674b33cab2a960c6b275357e72b8 WatchSource:0}: Error finding container 4bbc5e8f3ce6775d094673644f5cb7355eba674b33cab2a960c6b275357e72b8: Status 404 returned error can't find the container with id 4bbc5e8f3ce6775d094673644f5cb7355eba674b33cab2a960c6b275357e72b8 Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.794652 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.824086 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.824124 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.824133 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.824157 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.824167 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:02Z","lastTransitionTime":"2026-01-31T14:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.832539 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.870682 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.911472 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250c
ae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.926647 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.926697 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.926705 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:02 crc 
kubenswrapper[4751]: I0131 14:42:02.926720 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:02 crc kubenswrapper[4751]: I0131 14:42:02.926728 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:02Z","lastTransitionTime":"2026-01-31T14:42:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.029435 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.029473 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.029483 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.029496 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.029505 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:03Z","lastTransitionTime":"2026-01-31T14:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.132655 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.132698 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.132713 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.132730 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.132741 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:03Z","lastTransitionTime":"2026-01-31T14:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.235984 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.236516 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.236526 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.236552 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.236568 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:03Z","lastTransitionTime":"2026-01-31T14:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.338958 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.339001 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.339010 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.339024 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.339033 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:03Z","lastTransitionTime":"2026-01-31T14:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.362514 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 21:44:09.525256209 +0000 UTC Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.405086 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.405125 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.405151 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:42:03 crc kubenswrapper[4751]: E0131 14:42:03.405229 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:42:03 crc kubenswrapper[4751]: E0131 14:42:03.405329 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:42:03 crc kubenswrapper[4751]: E0131 14:42:03.405408 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.441925 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.441966 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.441979 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.442001 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.442014 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:03Z","lastTransitionTime":"2026-01-31T14:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.545210 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.545273 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.545292 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.545316 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.545348 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:03Z","lastTransitionTime":"2026-01-31T14:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.595281 4751 generic.go:334] "Generic (PLEG): container finished" podID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerID="122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9" exitCode=0 Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.595399 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" event={"ID":"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b","Type":"ContainerDied","Data":"122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9"} Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.595479 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" event={"ID":"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b","Type":"ContainerStarted","Data":"4bbc5e8f3ce6775d094673644f5cb7355eba674b33cab2a960c6b275357e72b8"} Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.597197 4751 generic.go:334] "Generic (PLEG): container finished" podID="c5353863-ec39-4357-9b86-9be42ca17916" containerID="53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39" exitCode=0 Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.597233 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" event={"ID":"c5353863-ec39-4357-9b86-9be42ca17916","Type":"ContainerDied","Data":"53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39"} Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.617012 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.637787 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.648111 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.648158 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.648174 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.648197 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.648211 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:03Z","lastTransitionTime":"2026-01-31T14:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.653396 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.684087 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.699531 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\"
:{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.716879 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.729896 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.742433 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250c
ae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.751860 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.751919 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.751931 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:03 crc 
kubenswrapper[4751]: I0131 14:42:03.751947 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.751959 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:03Z","lastTransitionTime":"2026-01-31T14:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.757351 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.769432 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9a
d6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true
,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.784792 4751 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.802888 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"na
me\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.823914 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\"
:false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\
\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/s
erviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.838368 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 
14:42:03.851622 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.855415 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.855467 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.855479 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.855494 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.855525 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:03Z","lastTransitionTime":"2026-01-31T14:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.876316 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.890389 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.907971 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.912645 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-lxrfr"] Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.912995 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-lxrfr" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.915576 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.915662 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.915550 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.915670 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.927373 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.948832 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.960163 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.960201 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.960213 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.960231 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.960242 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:03Z","lastTransitionTime":"2026-01-31T14:42:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.962775 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250cae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.977877 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/ope
nshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{
\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:03 crc kubenswrapper[4751]: I0131 14:42:03.989237 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:03Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.002377 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.015610 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.031282 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.062324 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.062358 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:04 crc 
kubenswrapper[4751]: I0131 14:42:04.062366 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.062378 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.062386 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:04Z","lastTransitionTime":"2026-01-31T14:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.071447 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.108705 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6b895e2a-7887-41c3-b641-9c72bb085dda-host\") pod \"node-ca-lxrfr\" (UID: \"6b895e2a-7887-41c3-b641-9c72bb085dda\") " pod="openshift-image-registry/node-ca-lxrfr" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.108738 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9hbh\" 
(UniqueName: \"kubernetes.io/projected/6b895e2a-7887-41c3-b641-9c72bb085dda-kube-api-access-s9hbh\") pod \"node-ca-lxrfr\" (UID: \"6b895e2a-7887-41c3-b641-9c72bb085dda\") " pod="openshift-image-registry/node-ca-lxrfr" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.108774 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6b895e2a-7887-41c3-b641-9c72bb085dda-serviceca\") pod \"node-ca-lxrfr\" (UID: \"6b895e2a-7887-41c3-b641-9c72bb085dda\") " pod="openshift-image-registry/node-ca-lxrfr" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.109789 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lxrfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b895e2a-7887-41c3-b641-9c72bb085dda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9hbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lxrfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.150036 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.164652 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.164687 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.164695 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.164709 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.164719 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:04Z","lastTransitionTime":"2026-01-31T14:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.188312 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.209822 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6b895e2a-7887-41c3-b641-9c72bb085dda-serviceca\") pod \"node-ca-lxrfr\" (UID: \"6b895e2a-7887-41c3-b641-9c72bb085dda\") " pod="openshift-image-registry/node-ca-lxrfr" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.209874 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6b895e2a-7887-41c3-b641-9c72bb085dda-host\") pod \"node-ca-lxrfr\" (UID: \"6b895e2a-7887-41c3-b641-9c72bb085dda\") " pod="openshift-image-registry/node-ca-lxrfr" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.209896 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s9hbh\" (UniqueName: 
\"kubernetes.io/projected/6b895e2a-7887-41c3-b641-9c72bb085dda-kube-api-access-s9hbh\") pod \"node-ca-lxrfr\" (UID: \"6b895e2a-7887-41c3-b641-9c72bb085dda\") " pod="openshift-image-registry/node-ca-lxrfr" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.210037 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/6b895e2a-7887-41c3-b641-9c72bb085dda-host\") pod \"node-ca-lxrfr\" (UID: \"6b895e2a-7887-41c3-b641-9c72bb085dda\") " pod="openshift-image-registry/node-ca-lxrfr" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.210671 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/6b895e2a-7887-41c3-b641-9c72bb085dda-serviceca\") pod \"node-ca-lxrfr\" (UID: \"6b895e2a-7887-41c3-b641-9c72bb085dda\") " pod="openshift-image-registry/node-ca-lxrfr" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.239001 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.261583 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s9hbh\" (UniqueName: \"kubernetes.io/projected/6b895e2a-7887-41c3-b641-9c72bb085dda-kube-api-access-s9hbh\") pod \"node-ca-lxrfr\" (UID: \"6b895e2a-7887-41c3-b641-9c72bb085dda\") " pod="openshift-image-registry/node-ca-lxrfr" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.267090 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.267127 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.267136 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 
14:42:04.267152 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.267161 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:04Z","lastTransitionTime":"2026-01-31T14:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.289497 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCou
nt\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.330951 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.362731 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 18:13:54.043065574 +0000 UTC Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.369444 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 
14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.369470 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.369479 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.369492 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.369502 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:04Z","lastTransitionTime":"2026-01-31T14:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.371790 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.411414 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250c
ae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.454182 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.471867 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.471904 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.471914 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.471929 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.471940 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:04Z","lastTransitionTime":"2026-01-31T14:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.493016 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube
-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\
\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.528243 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-lxrfr" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.531950 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"contai
nerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-contr
oller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:04 crc kubenswrapper[4751]: W0131 14:42:04.540333 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b895e2a_7887_41c3_b641_9c72bb085dda.slice/crio-3321dec98fc03add119161935af375b4c1f47e9db4373c363cff3f4a2824b027 WatchSource:0}: Error finding container 3321dec98fc03add119161935af375b4c1f47e9db4373c363cff3f4a2824b027: Status 404 returned error can't find the container with id 3321dec98fc03add119161935af375b4c1f47e9db4373c363cff3f4a2824b027 Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.571408 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.574263 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.574296 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.574305 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.574318 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.574328 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:04Z","lastTransitionTime":"2026-01-31T14:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.601849 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lxrfr" event={"ID":"6b895e2a-7887-41c3-b641-9c72bb085dda","Type":"ContainerStarted","Data":"3321dec98fc03add119161935af375b4c1f47e9db4373c363cff3f4a2824b027"} Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.604263 4751 generic.go:334] "Generic (PLEG): container finished" podID="c5353863-ec39-4357-9b86-9be42ca17916" containerID="c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db" exitCode=0 Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.604331 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" event={"ID":"c5353863-ec39-4357-9b86-9be42ca17916","Type":"ContainerDied","Data":"c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db"} Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.608814 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" event={"ID":"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b","Type":"ContainerStarted","Data":"701178503ef461547e687442de0607af20cee8bca075347d1d1cb2a439562232"} Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.608862 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" event={"ID":"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b","Type":"ContainerStarted","Data":"5623c591d9a6806335a9b671648fc172e749568a7ad29c3bd41f1a990ba56010"} Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.608873 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" event={"ID":"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b","Type":"ContainerStarted","Data":"20273cc05bc490f8dd5258dccd02aa0e97d1d22c159ce2c177594920f4af83de"} Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.608881 4751 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" event={"ID":"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b","Type":"ContainerStarted","Data":"f3446cde960ab284f70a338a9a4bfee334fdbdef503d868692fb847f4df6acba"} Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.608890 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" event={"ID":"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b","Type":"ContainerStarted","Data":"357afc89f219ac34037aaeb7086076b36f05907502c53006aeb588eaf440cc74"} Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.608900 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" event={"ID":"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b","Type":"ContainerStarted","Data":"e8f6719bdf2fef7e21f7c5cdcb3de4b4f08e69bd62e22cb10fe157f638b3c148"} Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.614584 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.651188 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.676653 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:04 crc 
kubenswrapper[4751]: I0131 14:42:04.676693 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.676704 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.676720 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.676730 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:04Z","lastTransitionTime":"2026-01-31T14:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.690618 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy 
whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-a
llowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: 
current time 2026-01-31T14:42:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.730953 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID
\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller
-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.772392 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.779030 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.779063 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.779095 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.779113 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.779124 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:04Z","lastTransitionTime":"2026-01-31T14:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.809291 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.847455 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lxrfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b895e2a-7887-41c3-b641-9c72bb085dda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9hbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lxrfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.882378 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.882426 4751 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.882437 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.882455 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.882468 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:04Z","lastTransitionTime":"2026-01-31T14:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.893977 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.935302 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.967758 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.984562 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.984639 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.984661 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.984705 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:04 crc kubenswrapper[4751]: I0131 14:42:04.984740 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:04Z","lastTransitionTime":"2026-01-31T14:42:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.025609 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.078420 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.087414 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.087479 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.087495 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.087523 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.087549 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:05Z","lastTransitionTime":"2026-01-31T14:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.097170 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.117853 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:42:05 crc kubenswrapper[4751]: E0131 14:42:05.118028 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:42:13.117999377 +0000 UTC m=+37.492712262 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.118528 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.118582 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.118650 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:42:05 crc kubenswrapper[4751]: E0131 14:42:05.118681 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not 
registered Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.118691 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:05 crc kubenswrapper[4751]: E0131 14:42:05.118700 4751 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:42:05 crc kubenswrapper[4751]: E0131 14:42:05.118742 4751 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:42:05 crc kubenswrapper[4751]: E0131 14:42:05.118750 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:42:05 crc kubenswrapper[4751]: E0131 14:42:05.118765 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:42:05 crc kubenswrapper[4751]: E0131 14:42:05.118776 4751 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:42:05 crc kubenswrapper[4751]: E0131 14:42:05.118781 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf 
podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:42:13.118772487 +0000 UTC m=+37.493485362 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:42:05 crc kubenswrapper[4751]: E0131 14:42:05.118704 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:42:05 crc kubenswrapper[4751]: E0131 14:42:05.118799 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:42:13.118790987 +0000 UTC m=+37.493503872 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:42:05 crc kubenswrapper[4751]: E0131 14:42:05.118815 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 14:42:13.118809708 +0000 UTC m=+37.493522593 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:42:05 crc kubenswrapper[4751]: E0131 14:42:05.118821 4751 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:42:05 crc kubenswrapper[4751]: E0131 14:42:05.118894 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 14:42:13.118864879 +0000 UTC m=+37.493577944 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.130842 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.170555 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250c
ae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.191050 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.191092 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.191100 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:05 crc 
kubenswrapper[4751]: I0131 14:42:05.191114 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.191124 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:05Z","lastTransitionTime":"2026-01-31T14:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.294432 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.294543 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.294566 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.294609 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.294626 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:05Z","lastTransitionTime":"2026-01-31T14:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.363630 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 13:26:52.867534864 +0000 UTC Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.398218 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.398301 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.398321 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.398352 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.398371 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:05Z","lastTransitionTime":"2026-01-31T14:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.405617 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.405672 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.405750 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:42:05 crc kubenswrapper[4751]: E0131 14:42:05.405804 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:42:05 crc kubenswrapper[4751]: E0131 14:42:05.405957 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:42:05 crc kubenswrapper[4751]: E0131 14:42:05.406047 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.502548 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.502628 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.502649 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.502678 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.502701 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:05Z","lastTransitionTime":"2026-01-31T14:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.605735 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.605796 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.605813 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.605837 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.605854 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:05Z","lastTransitionTime":"2026-01-31T14:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.616324 4751 generic.go:334] "Generic (PLEG): container finished" podID="c5353863-ec39-4357-9b86-9be42ca17916" containerID="d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e" exitCode=0 Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.616389 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" event={"ID":"c5353863-ec39-4357-9b86-9be42ca17916","Type":"ContainerDied","Data":"d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e"} Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.619652 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-lxrfr" event={"ID":"6b895e2a-7887-41c3-b641-9c72bb085dda","Type":"ContainerStarted","Data":"b5d0546721c181de3a5f2f4e791dcaeba69f94467bfbf08b4470332a0b4b49fc"} Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.640357 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250c
ae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.659459 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.682206 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.697163 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.709185 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.709226 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.709237 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.709252 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.709263 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:05Z","lastTransitionTime":"2026-01-31T14:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.714095 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.728872 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.741732 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.754875 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.769509 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 
14:42:05.786099 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.797567 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lxrfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b895e2a-7887-41c3-b641-9c72bb085dda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9hbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lxrfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.811833 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.812721 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.812772 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.812786 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.812806 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.812821 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:05Z","lastTransitionTime":"2026-01-31T14:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.825699 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.853517 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.867798 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a7
9379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.882525 4751 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.900657 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"na
me\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.915600 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.915654 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.915671 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.915694 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.915710 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:05Z","lastTransitionTime":"2026-01-31T14:42:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.916911 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 
14:42:05.931890 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:05 crc kubenswrapper[4751]: I0131 14:42:05.971512 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lxrfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b895e2a-7887-41c3-b641-9c72bb085dda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d0546721c181de3a5f2f4e791dcaeba69f94467bfbf08b4470332a0b4b49fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9hbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lxrfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:05Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.013161 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31
T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.017882 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.017942 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.017962 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.017988 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.018008 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:06Z","lastTransitionTime":"2026-01-31T14:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.053320 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.106257 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.122793 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.122858 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.122883 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.122913 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.122936 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:06Z","lastTransitionTime":"2026-01-31T14:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.137179 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.174951 4751 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.180326 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Patch \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-console/pods/networking-console-plugin-85b44fc459-gdk6g/status\": read tcp 38.102.83.98:47400->38.102.83.98:6443: use of closed network connection" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.222962 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.225375 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.225559 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.225685 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.225807 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.225931 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:06Z","lastTransitionTime":"2026-01-31T14:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.254788 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\
\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250cae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.298293 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch 
status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserv
er\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc
276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.363928 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-17 21:40:35.355473091 +0000 UTC Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.365199 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.365278 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.365303 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.365335 4751 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.365356 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:06Z","lastTransitionTime":"2026-01-31T14:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.425933 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"sta
rtedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.445217 4751 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-dns/node-resolver-68hvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"pha
se\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.468008 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.468061 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.468106 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.468134 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.468152 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:06Z","lastTransitionTime":"2026-01-31T14:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.480311 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.506904 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\
"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250cae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.531542 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.553439 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.571248 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.571280 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.571290 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.571307 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.571318 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:06Z","lastTransitionTime":"2026-01-31T14:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.572024 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.613500 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.626339 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" event={"ID":"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b","Type":"ContainerStarted","Data":"e7f5e784895f315aa4d8fb6039b9e70c46f977a80fdd5d39ead81d75048abd6c"} Jan 31 14:42:06 
crc kubenswrapper[4751]: I0131 14:42:06.629383 4751 generic.go:334] "Generic (PLEG): container finished" podID="c5353863-ec39-4357-9b86-9be42ca17916" containerID="3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e" exitCode=0 Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.630273 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" event={"ID":"c5353863-ec39-4357-9b86-9be42ca17916","Type":"ContainerDied","Data":"3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e"} Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.654890 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c
07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/k
ubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.673787 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.673824 4751 kubelet_node_status.go:724] "Recording event message for 
node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.673837 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.673855 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.673866 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:06Z","lastTransitionTime":"2026-01-31T14:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.694227 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.735200 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.776109 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\
\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 
14:42:06.776564 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.776591 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.776603 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.776620 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.776634 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:06Z","lastTransitionTime":"2026-01-31T14:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.815642 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.851898 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lxrfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b895e2a-7887-41c3-b641-9c72bb085dda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d0546721c181de3a5f2f4e791dcaeba69f94467bfbf08b4470332a0b4b49fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9hbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lxrfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.879415 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.879460 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.879470 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.879485 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.879495 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:06Z","lastTransitionTime":"2026-01-31T14:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.898038 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.933004 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lxrfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b895e2a-7887-41c3-b641-9c72bb085dda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d0546721c181de3a5f2f4e791dcaeba69f94467bfbf08b4470332a0b4b49fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9hbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lxrfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.978762 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31
T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.982919 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.983018 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.983035 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.983063 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:06 crc kubenswrapper[4751]: I0131 14:42:06.983130 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:06Z","lastTransitionTime":"2026-01-31T14:42:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.011956 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.065317 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/v
ar/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},
{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\
"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mount
Path\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\"
:{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.085953 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.086034 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.086099 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.086131 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.086150 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:07Z","lastTransitionTime":"2026-01-31T14:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.101204 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.139133 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.175968 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.188723 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.188792 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.188816 4751 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.188844 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.188862 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:07Z","lastTransitionTime":"2026-01-31T14:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.216826 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.255954 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250c
ae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.292035 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.292143 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.292169 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:07 crc 
kubenswrapper[4751]: I0131 14:42:07.292211 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.292242 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:07Z","lastTransitionTime":"2026-01-31T14:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.297383 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.337890 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.364779 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 10:20:01.305474936 +0000 UTC Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.379200 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.397065 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.397423 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.397632 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.397827 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.398013 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:07Z","lastTransitionTime":"2026-01-31T14:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.405130 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:07 crc kubenswrapper[4751]: E0131 14:42:07.405500 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.405584 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:42:07 crc kubenswrapper[4751]: E0131 14:42:07.405886 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.405643 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:42:07 crc kubenswrapper[4751]: E0131 14:42:07.406262 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.424800 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.502730 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.503119 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.503263 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.503420 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.503550 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:07Z","lastTransitionTime":"2026-01-31T14:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.606625 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.606651 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.606661 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.606677 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.606688 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:07Z","lastTransitionTime":"2026-01-31T14:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.640476 4751 generic.go:334] "Generic (PLEG): container finished" podID="c5353863-ec39-4357-9b86-9be42ca17916" containerID="fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d" exitCode=0 Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.640536 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" event={"ID":"c5353863-ec39-4357-9b86-9be42ca17916","Type":"ContainerDied","Data":"fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d"} Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.654242 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\
\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.671950 4751 
status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"
hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.691776 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node 
kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID
\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wai
ting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":
0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\
\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.704367 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.712514 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.712575 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.712595 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:07 crc 
kubenswrapper[4751]: I0131 14:42:07.712620 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.712637 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:07Z","lastTransitionTime":"2026-01-31T14:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.716013 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250cae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.731682 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.751974 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.765679 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.783218 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.814871 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.815954 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.817098 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.817109 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.817123 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.817134 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:07Z","lastTransitionTime":"2026-01-31T14:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.853912 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:07Z 
is after 2025-08-24T17:21:41Z" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.902304 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"c
nibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.920934 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.921169 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.921247 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.921365 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.921455 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:07Z","lastTransitionTime":"2026-01-31T14:42:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.929834 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:07 crc kubenswrapper[4751]: I0131 14:42:07.972005 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lxrfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b895e2a-7887-41c3-b641-9c72bb085dda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d0546721c181de3a5f2f4e791dcaeba69f94467bfbf08b4470332a0b4b49fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9hbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lxrfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:07Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.024316 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.024377 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.024396 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.024423 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.024440 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:08Z","lastTransitionTime":"2026-01-31T14:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.127306 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.127363 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.127380 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.127454 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.127475 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:08Z","lastTransitionTime":"2026-01-31T14:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.230808 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.230888 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.230915 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.230953 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.230975 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:08Z","lastTransitionTime":"2026-01-31T14:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.334308 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.334379 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.334397 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.334422 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.334439 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:08Z","lastTransitionTime":"2026-01-31T14:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.365868 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 17:10:37.784530189 +0000 UTC Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.437212 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.437267 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.437285 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.437311 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.437328 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:08Z","lastTransitionTime":"2026-01-31T14:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.540200 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.540235 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.540245 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.540262 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.540275 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:08Z","lastTransitionTime":"2026-01-31T14:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.644149 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.644717 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.644828 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.644911 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.644985 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:08Z","lastTransitionTime":"2026-01-31T14:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.652012 4751 generic.go:334] "Generic (PLEG): container finished" podID="c5353863-ec39-4357-9b86-9be42ca17916" containerID="2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc" exitCode=0 Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.652096 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" event={"ID":"c5353863-ec39-4357-9b86-9be42ca17916","Type":"ContainerDied","Data":"2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc"} Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.679229 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.694690 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lxrfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b895e2a-7887-41c3-b641-9c72bb085dda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d0546721c181de3a5f2f4e791dcaeba69f94467bfbf08b4470332a0b4b49fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9hbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lxrfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.725358 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.744234 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":
{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:08Z is after 2025-08-24T17:21:41Z" Jan 31 
14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.747244 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.747357 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.747443 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.747576 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.747684 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:08Z","lastTransitionTime":"2026-01-31T14:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.760031 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.779951 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.799429 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.816439 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.833946 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.849782 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250c
ae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.850658 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.850716 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.850735 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:08 crc 
kubenswrapper[4751]: I0131 14:42:08.850760 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.850781 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:08Z","lastTransitionTime":"2026-01-31T14:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.869122 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413b
dcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Dis
abled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.889745 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Di
sabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367
c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\
\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"
ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.902258 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.914150 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:08Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.952998 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.953040 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.953054 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.953094 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:08 crc kubenswrapper[4751]: I0131 14:42:08.953115 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:08Z","lastTransitionTime":"2026-01-31T14:42:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.055859 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.055931 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.055951 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.055977 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.055996 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:09Z","lastTransitionTime":"2026-01-31T14:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.158972 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.159001 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.159011 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.159027 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.159038 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:09Z","lastTransitionTime":"2026-01-31T14:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.262278 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.262340 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.262365 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.262393 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.262415 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:09Z","lastTransitionTime":"2026-01-31T14:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.369776 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 19:58:13.040301254 +0000 UTC Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.371537 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.371716 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.371745 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.371837 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.371862 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:09Z","lastTransitionTime":"2026-01-31T14:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.405290 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.405365 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.405424 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:42:09 crc kubenswrapper[4751]: E0131 14:42:09.405739 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:42:09 crc kubenswrapper[4751]: E0131 14:42:09.405894 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:42:09 crc kubenswrapper[4751]: E0131 14:42:09.406023 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.476182 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.476252 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.476270 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.476297 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.476317 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:09Z","lastTransitionTime":"2026-01-31T14:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.580324 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.580379 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.580395 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.580418 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.580436 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:09Z","lastTransitionTime":"2026-01-31T14:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.662779 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" event={"ID":"c5353863-ec39-4357-9b86-9be42ca17916","Type":"ContainerStarted","Data":"99c893782bcbcbd370030fe164c6849abe258ce88db65e3d22c269d20d0bef4e"} Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.672508 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" event={"ID":"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b","Type":"ContainerStarted","Data":"c599de37a76f4a9f00441a0b18a38e5315e42c49b315308d22f67c4cc68a8794"} Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.673523 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.673775 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.683028 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.683115 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.683142 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.683177 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.683205 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:09Z","lastTransitionTime":"2026-01-31T14:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.685222 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.702430 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.757319 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.758822 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.785663 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.785713 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.785730 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.785756 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.785775 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:09Z","lastTransitionTime":"2026-01-31T14:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.803844 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\
\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"wa
iting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mou
ntPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.828961 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.843582 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.857513 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.870824 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.882695 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250c
ae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.888206 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.888255 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.888272 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:09 crc 
kubenswrapper[4751]: I0131 14:42:09.888301 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.888319 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:09Z","lastTransitionTime":"2026-01-31T14:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.898316 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\
"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\
\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.917414 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.933358 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.955117 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c893782bcbcbd370030fe164c6849abe258ce88db65e3d22c269d20d0bef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.971208 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be 
located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.983634 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lxrfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b895e2a-7887-41c3-b641-9c72bb085dda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d0546721c181de3a5f2f4e791dcaeba69f94467bfbf08b4470332a0b4b49fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9hbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lxrfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:09Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.990423 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.990487 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.990506 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.990531 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:09 crc kubenswrapper[4751]: I0131 14:42:09.990548 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:09Z","lastTransitionTime":"2026-01-31T14:42:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.004198 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.021046 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lxrfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b895e2a-7887-41c3-b641-9c72bb085dda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d0546721c181de3a5f2f4e791dcaeba69f94467bfbf08b4470332a0b4b49fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9hbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lxrfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.040092 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31
T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.054265 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.084527 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3446cde960ab284f70a338a9a4bfee334fdbdef503d868692fb847f4df6acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20273cc05bc490f8dd5258dccd02aa0e97d1d22c159ce2c177594920f4af83de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://701178503ef461547e687442de0607af20cee8bca075347d1d1cb2a439562232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5623c591d9a6806335a9b671648fc172e749568a7ad29c3bd41f1a990ba56010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://357afc89f219ac34037aaeb7086076b36f05907502c53006aeb588eaf440cc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8f6719bdf2fef7e21f7c5cdcb3de4b4f08e69bd62e22cb10fe157f638b3c148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c599de37a76f4a9f00441a0b18a38e5315e42c49b315308d22f67c4cc68a8794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnl
y\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f5e784895f315aa4d8fb6039b9e70c46f977a80fdd5d39ead81d75048abd6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b
17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Run
ning\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.093101 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.093152 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.093169 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.093193 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.093209 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:10Z","lastTransitionTime":"2026-01-31T14:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.099985 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.118988 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.142719 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250c
ae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.174605 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",
\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e277
53fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 
genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sh
a256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.196003 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.196299 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.196373 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.196454 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.196524 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:10Z","lastTransitionTime":"2026-01-31T14:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.197822 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\
\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.219515 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f
9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.232328 4751 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.244295 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"na
me\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.260507 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c893782bcbcbd370030fe164c6849abe258ce88db65e3d22c269d20d0bef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:08Z\\\"}},\\\"volumeMounts\\\":
[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cn
i/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e5431
9f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/se
rviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:10Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.299636 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.299685 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.299699 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.299717 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.299730 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:10Z","lastTransitionTime":"2026-01-31T14:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.370343 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 06:57:01.065586986 +0000 UTC Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.402773 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.403062 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.403296 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.403525 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.403717 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:10Z","lastTransitionTime":"2026-01-31T14:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.507005 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.507105 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.507132 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.507160 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.507184 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:10Z","lastTransitionTime":"2026-01-31T14:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.609841 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.609916 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.609941 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.609974 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.609997 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:10Z","lastTransitionTime":"2026-01-31T14:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.676754 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.712765 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.713352 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.713741 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.713929 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.714225 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:10Z","lastTransitionTime":"2026-01-31T14:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.983098 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.983179 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.983205 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.983239 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:10 crc kubenswrapper[4751]: I0131 14:42:10.983276 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:10Z","lastTransitionTime":"2026-01-31T14:42:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.086949 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.087000 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.087016 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.087039 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.087056 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:11Z","lastTransitionTime":"2026-01-31T14:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.189442 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.189486 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.189503 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.189526 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.189542 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:11Z","lastTransitionTime":"2026-01-31T14:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.291771 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.291805 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.291814 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.291827 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.291836 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:11Z","lastTransitionTime":"2026-01-31T14:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.370635 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 09:28:13.84131058 +0000 UTC Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.393939 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.394170 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.394244 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.394322 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.394376 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:11Z","lastTransitionTime":"2026-01-31T14:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.405026 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.405084 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.405137 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:42:11 crc kubenswrapper[4751]: E0131 14:42:11.405453 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:42:11 crc kubenswrapper[4751]: E0131 14:42:11.405272 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:42:11 crc kubenswrapper[4751]: E0131 14:42:11.405504 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.496930 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.496962 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.496970 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.496983 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.496992 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:11Z","lastTransitionTime":"2026-01-31T14:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.599944 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.599978 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.599988 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.600003 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.600015 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:11Z","lastTransitionTime":"2026-01-31T14:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.680295 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.702625 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.702965 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.702983 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.703007 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.703024 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:11Z","lastTransitionTime":"2026-01-31T14:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.806236 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.806287 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.806305 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.806329 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.806347 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:11Z","lastTransitionTime":"2026-01-31T14:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.909533 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.909571 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.909588 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.909609 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:11 crc kubenswrapper[4751]: I0131 14:42:11.909626 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:11Z","lastTransitionTime":"2026-01-31T14:42:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.011975 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.012028 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.012045 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.012096 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.012122 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:12Z","lastTransitionTime":"2026-01-31T14:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.114442 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.114474 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.114486 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.114500 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.114511 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:12Z","lastTransitionTime":"2026-01-31T14:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.217399 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.217456 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.217474 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.217498 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.217515 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:12Z","lastTransitionTime":"2026-01-31T14:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.319893 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.319946 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.319963 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.319984 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.320000 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:12Z","lastTransitionTime":"2026-01-31T14:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.371660 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 18:01:18.7718094 +0000 UTC Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.422551 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.422613 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.422638 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.422667 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.422692 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:12Z","lastTransitionTime":"2026-01-31T14:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.468307 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.468368 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.468392 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.468418 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.468441 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:12Z","lastTransitionTime":"2026-01-31T14:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:12 crc kubenswrapper[4751]: E0131 14:42:12.489973 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2bc08d22-1e39-4800-b402-ea260cc19637\\\",\\\"systemUUID\\\":\\\"ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.494262 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.494319 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.494343 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.494372 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.494392 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:12Z","lastTransitionTime":"2026-01-31T14:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:12 crc kubenswrapper[4751]: E0131 14:42:12.515193 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status […] for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.521958 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.522021 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.522039 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.522099 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.522121 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:12Z","lastTransitionTime":"2026-01-31T14:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:12 crc kubenswrapper[4751]: E0131 14:42:12.546278 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2bc08d22-1e39-4800-b402-ea260cc19637\\\",\\\"systemUUID\\\":\\\"ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.552758 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.552813 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.552831 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.552859 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.552878 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:12Z","lastTransitionTime":"2026-01-31T14:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:12 crc kubenswrapper[4751]: E0131 14:42:12.573639 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2bc08d22-1e39-4800-b402-ea260cc19637\\\",\\\"systemUUID\\\":\\\"ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.583328 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.583447 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.583482 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.583513 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.583542 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:12Z","lastTransitionTime":"2026-01-31T14:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:12 crc kubenswrapper[4751]: E0131 14:42:12.605324 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:12Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:12Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:12Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:12Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2bc08d22-1e39-4800-b402-ea260cc19637\\\",\\\"systemUUID\\\":\\\"ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:12 crc kubenswrapper[4751]: E0131 14:42:12.605578 4751 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.608391 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.608454 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.608473 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.608499 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.608519 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:12Z","lastTransitionTime":"2026-01-31T14:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.686976 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8cdt_ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b/ovnkube-controller/0.log" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.690741 4751 generic.go:334] "Generic (PLEG): container finished" podID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerID="c599de37a76f4a9f00441a0b18a38e5315e42c49b315308d22f67c4cc68a8794" exitCode=1 Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.690802 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" event={"ID":"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b","Type":"ContainerDied","Data":"c599de37a76f4a9f00441a0b18a38e5315e42c49b315308d22f67c4cc68a8794"} Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.691993 4751 scope.go:117] "RemoveContainer" containerID="c599de37a76f4a9f00441a0b18a38e5315e42c49b315308d22f67c4cc68a8794" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.712422 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.712476 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.712495 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.712519 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.712483 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.712538 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:12Z","lastTransitionTime":"2026-01-31T14:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.726508 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"re
adOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.750059 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3446cde960ab284f70a338a9a4bfee334fdbdef503d868692fb847f4df6acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20273cc05bc490f8dd5258dccd02aa0e97d1d22c159ce2c177594920f4af83de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://701178503ef461547e687442de0607af20cee8bca075347d1d1cb2a439562232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5623c591d9a6806335a9b671648fc172e749568a7ad29c3bd41f1a990ba56010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://357afc89f219ac34037aaeb7086076b36f05907502c53006aeb588eaf440cc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8f6719bdf2fef7e21f7c5cdcb3de4b4f08e69bd62e22cb10fe157f638b3c148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c599de37a76f4a9f00441a0b18a38e5315e42c49b315308d22f67c4cc68a8794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c599de37a76f4a9f00441a0b18a38e5315e42c49b315308d22f67c4cc68a8794\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:42:11Z\\\",\\\"message\\\":\\\"d (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 14:42:11.894312 6073 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:42:11.894354 6073 handler.go:190] Sending *v1.Namespace event 
handler 1 for removal\\\\nI0131 14:42:11.894387 6073 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 14:42:11.894536 6073 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:42:11.895132 6073 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 14:42:11.895157 6073 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 14:42:11.895178 6073 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 14:42:11.895199 6073 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:42:11.895209 6073 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:42:11.895239 6073 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:42:11.895309 6073 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:42:11.895321 6073 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:42:11.895334 6073 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:42:11.895316 6073 factory.go:656] Stopping watch factory\\\\nI0131 14:42:11.895369 6073 ovnkube.go:599] Stopped ovnkube\\\\nI0131 
14:42:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f5e784895f315aa4d8fb6039b9e70c46f977a80fdd5d39ead81d75048abd6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6
aba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.770524 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.786879 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.800771 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250c
ae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.815250 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.815327 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.815362 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:12 crc 
kubenswrapper[4751]: I0131 14:42:12.815381 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.815395 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:12Z","lastTransitionTime":"2026-01-31T14:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.820003 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri
-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" 
name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"st
artedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.836955 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState
\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.852348 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.865104 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.882920 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.904134 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c893782bcbcbd370030fe164c6849abe258ce88db65e3d22c269d20d0bef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.917785 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.917856 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.917872 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.917892 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.917930 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:12Z","lastTransitionTime":"2026-01-31T14:42:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.921706 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:12 crc kubenswrapper[4751]: I0131 14:42:12.935240 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lxrfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b895e2a-7887-41c3-b641-9c72bb085dda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d0546721c181de3a5f2f4e791dcaeba69f94467bfbf08b4470332a0b4b49fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9hbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lxrfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:12Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.020920 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.020970 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.020987 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.021014 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.021036 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:13Z","lastTransitionTime":"2026-01-31T14:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.124021 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.124084 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.124098 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.124117 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.124131 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:13Z","lastTransitionTime":"2026-01-31T14:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.202095 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.202205 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:42:13 crc kubenswrapper[4751]: E0131 14:42:13.202261 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:42:29.202210367 +0000 UTC m=+53.576923292 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.202307 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:13 crc kubenswrapper[4751]: E0131 14:42:13.202335 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:42:13 crc kubenswrapper[4751]: E0131 14:42:13.202352 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:42:13 crc kubenswrapper[4751]: E0131 14:42:13.202365 4751 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.202362 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: 
\"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:42:13 crc kubenswrapper[4751]: E0131 14:42:13.202402 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 14:42:29.202391841 +0000 UTC m=+53.577104736 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.202422 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:13 crc kubenswrapper[4751]: E0131 14:42:13.202490 4751 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:42:13 crc kubenswrapper[4751]: E0131 14:42:13.202484 4751 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:42:13 
crc kubenswrapper[4751]: E0131 14:42:13.202508 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:42:13 crc kubenswrapper[4751]: E0131 14:42:13.202540 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:42:13 crc kubenswrapper[4751]: E0131 14:42:13.202558 4751 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:42:13 crc kubenswrapper[4751]: E0131 14:42:13.202526 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:42:29.202517145 +0000 UTC m=+53.577230040 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:42:13 crc kubenswrapper[4751]: E0131 14:42:13.202659 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:42:29.202622407 +0000 UTC m=+53.577335302 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:42:13 crc kubenswrapper[4751]: E0131 14:42:13.202697 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 14:42:29.202682069 +0000 UTC m=+53.577395074 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.225941 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.225998 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.226008 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.226024 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.226033 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:13Z","lastTransitionTime":"2026-01-31T14:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.328612 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.328639 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.328667 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.328681 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.328692 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:13Z","lastTransitionTime":"2026-01-31T14:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.372396 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 08:43:20.213969369 +0000 UTC Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.405037 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.405042 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.405102 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:13 crc kubenswrapper[4751]: E0131 14:42:13.405304 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:42:13 crc kubenswrapper[4751]: E0131 14:42:13.405448 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:42:13 crc kubenswrapper[4751]: E0131 14:42:13.405531 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.430661 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.430712 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.430722 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.430738 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.430747 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:13Z","lastTransitionTime":"2026-01-31T14:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.533621 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.533680 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.533697 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.533719 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.533736 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:13Z","lastTransitionTime":"2026-01-31T14:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.612066 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.634880 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.640658 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.640743 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.640772 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.640831 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.640871 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:13Z","lastTransitionTime":"2026-01-31T14:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.657122 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lxrfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b895e2a-7887-41c3-b641-9c72bb085dda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d0546721c181de3a5f2f4e791dcaeba69f94467bfbf08b4470332a0b4b49fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/d
ocker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9hbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lxrfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.677651 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.694691 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.699000 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8cdt_ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b/ovnkube-controller/0.log" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.703423 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" event={"ID":"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b","Type":"ContainerStarted","Data":"bbb2056238b11891fff35b2c74e6a809f7d369d116a25803ac32725568d9461a"} Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.703626 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.714404 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3446cde960ab284f70a338a9a4bfee334fdbdef503d868692fb847f4df6acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20273cc05bc490f8dd5258dccd02aa0e97d1d22c159ce2c177594920f4af83de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://701178503ef461547e687442de0607af20cee8bca075347d1d1cb2a439562232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5623c591d9a6806335a9b671648fc172e749568a7ad29c3bd41f1a990ba56010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://357afc89f219ac34037aaeb7086076b36f05907502c53006aeb588eaf440cc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8f6719bdf2fef7e21f7c5cdcb3de4b4f08e69bd62e22cb10fe157f638b3c148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c599de37a76f4a9f00441a0b18a38e5315e42c49b315308d22f67c4cc68a8794\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c599de37a76f4a9f00441a0b18a38e5315e42c49b315308d22f67c4cc68a8794\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:42:11Z\\\",\\\"message\\\":\\\"d (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 14:42:11.894312 6073 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:42:11.894354 6073 handler.go:190] Sending *v1.Namespace event 
handler 1 for removal\\\\nI0131 14:42:11.894387 6073 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 14:42:11.894536 6073 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:42:11.895132 6073 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 14:42:11.895157 6073 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 14:42:11.895178 6073 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 14:42:11.895199 6073 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:42:11.895209 6073 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:42:11.895239 6073 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:42:11.895309 6073 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:42:11.895321 6073 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:42:11.895334 6073 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:42:11.895316 6073 factory.go:656] Stopping watch factory\\\\nI0131 14:42:11.895369 6073 ovnkube.go:599] Stopped ovnkube\\\\nI0131 
14:42:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":
\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f5e784895f315aa4d8fb6039b9e70c46f977a80fdd5d39ead81d75048abd6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6
aba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.731200 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250c
ae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.744484 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.744530 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.744547 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:13 crc 
kubenswrapper[4751]: I0131 14:42:13.744573 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.744590 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:13Z","lastTransitionTime":"2026-01-31T14:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.752699 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14
:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 
14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.772784 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.791767 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.812853 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.833370 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.847410 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.847509 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.847534 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.847567 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.847593 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:13Z","lastTransitionTime":"2026-01-31T14:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.852638 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.873229 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"20
26-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.896133 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c893782bcbcbd370030fe164c6849abe258ce88db65e3d22c269d20d0bef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-mult
us-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a168
8df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":
\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.916693 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.939527 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c893782bcbcbd370030fe164c6849abe258ce88db65e3d22c269d20d0bef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a5
57f45b416496b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:
42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.950602 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.950654 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.950671 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.950693 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.950712 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:13Z","lastTransitionTime":"2026-01-31T14:42:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.959537 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d
95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.977960 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:13 crc kubenswrapper[4751]: I0131 14:42:13.997911 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:13Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.013878 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lxrfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b895e2a-7887-41c3-b641-9c72bb085dda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d0546721c181de3a5f2f4e791dcaeba69f94467bfbf08b4470332a0b4b49fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9hbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lxrfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.044945 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3446cde960ab284f70a338a9a4bfee334fdbdef503d868692fb847f4df6acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20273cc05bc490f8dd5258dccd02aa0e97d1d22c159ce2c177594920f4af83de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://701178503ef461547e687442de0607af20cee8bca075347d1d1cb2a439562232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5623c591d9a6806335a9b671648fc172e749568a7ad29c3bd41f1a990ba56010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://357afc89f219ac34037aaeb7086076b36f05907502c53006aeb588eaf440cc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8f6719bdf2fef7e21f7c5cdcb3de4b4f08e69bd62e22cb10fe157f638b3c148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb2056238b11891fff35b2c74e6a809f7d369d116a25803ac32725568d9461a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c599de37a76f4a9f00441a0b18a38e5315e42c49b315308d22f67c4cc68a8794\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:42:11Z\\\",\\\"message\\\":\\\"d (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 14:42:11.894312 6073 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:42:11.894354 6073 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 14:42:11.894387 6073 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 
14:42:11.894536 6073 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:42:11.895132 6073 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 14:42:11.895157 6073 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 14:42:11.895178 6073 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 14:42:11.895199 6073 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:42:11.895209 6073 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:42:11.895239 6073 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:42:11.895309 6073 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:42:11.895321 6073 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:42:11.895334 6073 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:42:11.895316 6073 factory.go:656] Stopping watch factory\\\\nI0131 14:42:11.895369 6073 ovnkube.go:599] Stopped ovnkube\\\\nI0131 
14:42:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f5e784895f315aa4d8fb6039b9e70c46f977a80fdd5d39ead81d75048abd6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.054560 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.054614 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.054632 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.054699 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.054721 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:14Z","lastTransitionTime":"2026-01-31T14:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.066646 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-ide
ntity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.083313 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.108655 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"r
eady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\
"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.130581 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.147492 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.157611 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.157682 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.157699 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.158194 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.158249 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:14Z","lastTransitionTime":"2026-01-31T14:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.169353 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.189452 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250c
ae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.262379 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.262437 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.262455 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:14 crc 
kubenswrapper[4751]: I0131 14:42:14.262479 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.262496 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:14Z","lastTransitionTime":"2026-01-31T14:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.366061 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.366186 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.366204 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.366230 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.366246 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:14Z","lastTransitionTime":"2026-01-31T14:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.373382 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 05:43:15.311610298 +0000 UTC Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.377049 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q"] Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.382157 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.385373 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.387339 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.407549 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.413329 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd8c0730-67df-445e-a6ce-c2edce5d9c59-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4v79q\" (UID: \"cd8c0730-67df-445e-a6ce-c2edce5d9c59\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.413599 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd8c0730-67df-445e-a6ce-c2edce5d9c59-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4v79q\" (UID: \"cd8c0730-67df-445e-a6ce-c2edce5d9c59\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.413654 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-45q8c\" (UniqueName: \"kubernetes.io/projected/cd8c0730-67df-445e-a6ce-c2edce5d9c59-kube-api-access-45q8c\") pod \"ovnkube-control-plane-749d76644c-4v79q\" (UID: \"cd8c0730-67df-445e-a6ce-c2edce5d9c59\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.413698 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd8c0730-67df-445e-a6ce-c2edce5d9c59-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4v79q\" (UID: \"cd8c0730-67df-445e-a6ce-c2edce5d9c59\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.424838 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lxrfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b895e2a-7887-41c3-b641-9c72bb085dda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d0546721c181de3a5f2f4e791dcaeba69f94467bfbf08b4470332a0b4b49fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9hbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lxrfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.445009 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31
T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.459403 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.472429 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.472496 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.472519 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.472545 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.472562 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:14Z","lastTransitionTime":"2026-01-31T14:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.491870 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3446cde960ab284f70a338a9a4bfee334fdbdef503d868692fb847f4df6acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20273cc05bc490f8dd5258dccd02aa0e97d1d22c159ce2c177594920f4af83de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://701178503ef461547e687442de0607af20cee8bca075347d1d1cb2a439562232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5623c591d9a6806335a9b671648fc172e749568a7ad29c3bd41f1a990ba56010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://357afc89f219ac34037aaeb7086076b36f05907502c53006aeb588eaf440cc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8f6719bdf2fef7e21f7c5cdcb3de4b4f08e69bd62e22cb10fe157f638b3c148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb2056238b11891fff35b2c74e6a809f7d369d116a25803ac32725568d9461a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c599de37a76f4a9f00441a0b18a38e5315e42c49b315308d22f67c4cc68a8794\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:42:11Z\\\",\\\"message\\\":\\\"d (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 14:42:11.894312 6073 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:42:11.894354 6073 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 14:42:11.894387 6073 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 
14:42:11.894536 6073 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:42:11.895132 6073 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 14:42:11.895157 6073 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 14:42:11.895178 6073 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 14:42:11.895199 6073 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:42:11.895209 6073 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:42:11.895239 6073 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:42:11.895309 6073 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:42:11.895321 6073 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:42:11.895334 6073 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:42:11.895316 6073 factory.go:656] Stopping watch factory\\\\nI0131 14:42:11.895369 6073 ovnkube.go:599] Stopped ovnkube\\\\nI0131 
14:42:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"na
me\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f5e784895f315aa4d8fb6039b9e70c46f977a80fdd5d39ead81d75048abd6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"re
ady\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.513791 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.514547 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd8c0730-67df-445e-a6ce-c2edce5d9c59-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4v79q\" (UID: \"cd8c0730-67df-445e-a6ce-c2edce5d9c59\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.514614 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-45q8c\" (UniqueName: \"kubernetes.io/projected/cd8c0730-67df-445e-a6ce-c2edce5d9c59-kube-api-access-45q8c\") pod \"ovnkube-control-plane-749d76644c-4v79q\" (UID: \"cd8c0730-67df-445e-a6ce-c2edce5d9c59\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.514662 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd8c0730-67df-445e-a6ce-c2edce5d9c59-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4v79q\" (UID: \"cd8c0730-67df-445e-a6ce-c2edce5d9c59\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.514829 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd8c0730-67df-445e-a6ce-c2edce5d9c59-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4v79q\" (UID: \"cd8c0730-67df-445e-a6ce-c2edce5d9c59\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.517140 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/cd8c0730-67df-445e-a6ce-c2edce5d9c59-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-4v79q\" (UID: \"cd8c0730-67df-445e-a6ce-c2edce5d9c59\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.518673 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/cd8c0730-67df-445e-a6ce-c2edce5d9c59-env-overrides\") pod \"ovnkube-control-plane-749d76644c-4v79q\" (UID: \"cd8c0730-67df-445e-a6ce-c2edce5d9c59\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.524263 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/cd8c0730-67df-445e-a6ce-c2edce5d9c59-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-4v79q\" (UID: \"cd8c0730-67df-445e-a6ce-c2edce5d9c59\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.536866 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.544242 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-45q8c\" (UniqueName: 
\"kubernetes.io/projected/cd8c0730-67df-445e-a6ce-c2edce5d9c59-kube-api-access-45q8c\") pod \"ovnkube-control-plane-749d76644c-4v79q\" (UID: \"cd8c0730-67df-445e-a6ce-c2edce5d9c59\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.556817 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.574361 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250c
ae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.575751 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.575795 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.575813 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:14 crc 
kubenswrapper[4751]: I0131 14:42:14.575837 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.575855 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:14Z","lastTransitionTime":"2026-01-31T14:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.596663 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14
:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 
14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.615520 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8c0730-67df-445e-a6ce-c2edce5d9c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4v79q\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.635554 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"
volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\
\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.654034 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.676782 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.678436 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.678853 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.679005 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.679211 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.679351 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:14Z","lastTransitionTime":"2026-01-31T14:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.702275 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c893782bcbcbd370030fe164c6849abe258ce88db65e3d22c269d20d0bef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.703186 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.714469 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8cdt_ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b/ovnkube-controller/1.log" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.715635 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8cdt_ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b/ovnkube-controller/0.log" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.720771 4751 generic.go:334] "Generic (PLEG): container finished" podID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerID="bbb2056238b11891fff35b2c74e6a809f7d369d116a25803ac32725568d9461a" exitCode=1 Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.720823 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" event={"ID":"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b","Type":"ContainerDied","Data":"bbb2056238b11891fff35b2c74e6a809f7d369d116a25803ac32725568d9461a"} Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.720914 4751 scope.go:117] "RemoveContainer" containerID="c599de37a76f4a9f00441a0b18a38e5315e42c49b315308d22f67c4cc68a8794" Jan 31 14:42:14 crc kubenswrapper[4751]: 
I0131 14:42:14.722026 4751 scope.go:117] "RemoveContainer" containerID="bbb2056238b11891fff35b2c74e6a809f7d369d116a25803ac32725568d9461a" Jan 31 14:42:14 crc kubenswrapper[4751]: E0131 14:42:14.722328 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-n8cdt_openshift-ovn-kubernetes(ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" Jan 31 14:42:14 crc kubenswrapper[4751]: W0131 14:42:14.730616 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcd8c0730_67df_445e_a6ce_c2edce5d9c59.slice/crio-f9785bdb93fca017a08b7a9cef8f01e0a6749cd8cb26f88592770c721ada1695 WatchSource:0}: Error finding container f9785bdb93fca017a08b7a9cef8f01e0a6749cd8cb26f88592770c721ada1695: Status 404 returned error can't find the container with id f9785bdb93fca017a08b7a9cef8f01e0a6749cd8cb26f88592770c721ada1695 Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.746801 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e00
5ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.771570 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.782697 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.782760 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.782777 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.782803 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.782821 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:14Z","lastTransitionTime":"2026-01-31T14:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.791793 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.812956 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.831245 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250c
ae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.850301 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.869402 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.885944 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.885999 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.886019 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.886043 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.886061 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:14Z","lastTransitionTime":"2026-01-31T14:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.887170 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:14Z 
is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.906378 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c893782bcbcbd370030fe164c6849abe258ce88db65e3d22c269d20d0bef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"contai
nerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.920243 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8c0730-67df-445e-a6ce-c2edce5d9c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:14Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:14Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4v79q\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.939868 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.953013 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lxrfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b895e2a-7887-41c3-b641-9c72bb085dda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d0546721c181de3a5f2f4e791dcaeba69f94467bfbf08b4470332a0b4b49fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9hbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lxrfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.968768 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31
T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.985633 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" 
err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\
"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.990614 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.990683 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.990709 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.990744 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:14 crc kubenswrapper[4751]: I0131 14:42:14.990769 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:14Z","lastTransitionTime":"2026-01-31T14:42:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.013677 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3446cde960ab284f70a338a9a4bfee334fdbdef503d868692fb847f4df6acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20273cc05bc490f8dd5258dccd02aa0e97d1d22c159ce2c177594920f4af83de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://701178503ef461547e687442de0607af20cee8bca075347d1d1cb2a439562232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5623c591d9a6806335a9b671648fc172e749568a7ad29c3bd41f1a990ba56010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://357afc89f219ac34037aaeb7086076b36f05907502c53006aeb588eaf440cc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8f6719bdf2fef7e21f7c5cdcb3de4b4f08e69bd62e22cb10fe157f638b3c148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb2056238b11891fff35b2c74e6a809f7d369d116a25803ac32725568d9461a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c599de37a76f4a9f00441a0b18a38e5315e42c49b315308d22f67c4cc68a8794\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:42:11Z\\\",\\\"message\\\":\\\"d (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 14:42:11.894312 6073 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:42:11.894354 6073 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 14:42:11.894387 6073 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 
14:42:11.894536 6073 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:42:11.895132 6073 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 14:42:11.895157 6073 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 14:42:11.895178 6073 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 14:42:11.895199 6073 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:42:11.895209 6073 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:42:11.895239 6073 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:42:11.895309 6073 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:42:11.895321 6073 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:42:11.895334 6073 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:42:11.895316 6073 factory.go:656] Stopping watch factory\\\\nI0131 14:42:11.895369 6073 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:42:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbb2056238b11891fff35b2c74e6a809f7d369d116a25803ac32725568d9461a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 14:42:13.656951 6206 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:42:13.657012 6206 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:42:13.657046 6206 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 14:42:13.657054 6206 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 14:42:13.657134 6206 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 
14:42:13.657145 6206 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:42:13.657168 6206 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 14:42:13.657186 6206 factory.go:656] Stopping watch factory\\\\nI0131 14:42:13.657200 6206 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:42:13.657232 6206 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:42:13.657247 6206 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:42:13.657256 6206 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:42:13.657266 6206 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:42:13.657275 6206 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:42:13.657286 6206 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\
\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f5e784895f315aa4d8fb6039b9e70c46f977a80fdd5d39ead81d75048abd6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.093863 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 
14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.093916 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.093926 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.093939 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.093950 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:15Z","lastTransitionTime":"2026-01-31T14:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.197118 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.197174 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.197191 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.197215 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.197231 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:15Z","lastTransitionTime":"2026-01-31T14:42:15Z","reason":"KubeletNotReady","message":"container runtime network not 
ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.301413 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.301480 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.301512 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.301537 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.301559 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:15Z","lastTransitionTime":"2026-01-31T14:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.373624 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 07:01:40.415046217 +0000 UTC Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.404643 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.404698 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.404718 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.404743 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.404760 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:15Z","lastTransitionTime":"2026-01-31T14:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.404829 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.404917 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.404951 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:15 crc kubenswrapper[4751]: E0131 14:42:15.405145 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:42:15 crc kubenswrapper[4751]: E0131 14:42:15.405341 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:42:15 crc kubenswrapper[4751]: E0131 14:42:15.405475 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.508586 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.508679 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.508703 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.508737 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.508760 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:15Z","lastTransitionTime":"2026-01-31T14:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.612048 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.612132 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.612149 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.612174 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.612192 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:15Z","lastTransitionTime":"2026-01-31T14:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.715031 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.715142 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.715161 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.715187 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.715208 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:15Z","lastTransitionTime":"2026-01-31T14:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.727181 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" event={"ID":"cd8c0730-67df-445e-a6ce-c2edce5d9c59","Type":"ContainerStarted","Data":"34680c760b5c6a6e2a521731e962301b54aa3184d5d66792fb43e991c6502a53"} Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.727251 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" event={"ID":"cd8c0730-67df-445e-a6ce-c2edce5d9c59","Type":"ContainerStarted","Data":"f27098d3c41dbb10f76ba04a8a989e91ff3eb6fe0fb0ca746e33417839235c5c"} Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.727270 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" event={"ID":"cd8c0730-67df-445e-a6ce-c2edce5d9c59","Type":"ContainerStarted","Data":"f9785bdb93fca017a08b7a9cef8f01e0a6749cd8cb26f88592770c721ada1695"} Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.730880 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8cdt_ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b/ovnkube-controller/1.log" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.752219 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e00
5ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.778555 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.810212 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.817986 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.818023 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.818035 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.818051 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.818063 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:15Z","lastTransitionTime":"2026-01-31T14:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.833601 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.848219 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250c
ae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.859802 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.864871 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-xtn6l"] Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.865548 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:42:15 crc kubenswrapper[4751]: E0131 14:42:15.865641 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.872618 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\
"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.895518 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed 
to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c893782bcbcbd370030fe164c6849abe258ce88db65e3d22c269d20d0bef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a5
57f45b416496b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:
42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"r
eason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.908882 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8c0730-67df-445e-a6ce-c2edce5d9c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f27098d3c41dbb10f76ba04a8a989e91ff3eb6fe0fb0ca746e33417839235c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"
state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34680c760b5c6a6e2a521731e962301b54aa3184d5d66792fb43e991c6502a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4v79q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-31T14:42:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.920564 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.920626 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.920652 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.920680 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.920703 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:15Z","lastTransitionTime":"2026-01-31T14:42:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.927516 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68aeb9c7-d3c3-4c34-96ab-bb947421c504-metrics-certs\") pod \"network-metrics-daemon-xtn6l\" (UID: \"68aeb9c7-d3c3-4c34-96ab-bb947421c504\") " pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.927613 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hljmn\" (UniqueName: \"kubernetes.io/projected/68aeb9c7-d3c3-4c34-96ab-bb947421c504-kube-api-access-hljmn\") pod \"network-metrics-daemon-xtn6l\" (UID: \"68aeb9c7-d3c3-4c34-96ab-bb947421c504\") " pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.929413 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.944363 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lxrfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b895e2a-7887-41c3-b641-9c72bb085dda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d0546721c181de3a5f2f4e791dcaeba69f94467bfbf08b4470332a0b4b49fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9hbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lxrfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.961325 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:15 crc kubenswrapper[4751]: I0131 14:42:15.977989 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.000279 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3446cde960ab284f70a338a9a4bfee334fdbdef503d868692fb847f4df6acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20273cc05bc490f8dd5258dccd02aa0e97d1d22c159ce2c177594920f4af83de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://701178503ef461547e687442de0607af20cee8bca075347d1d1cb2a439562232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5623c591d9a6806335a9b671648fc172e749568a7ad29c3bd41f1a990ba56010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://357afc89f219ac34037aaeb7086076b36f05907502c53006aeb588eaf440cc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8f6719bdf2fef7e21f7c5cdcb3de4b4f08e69bd62e22cb10fe157f638b3c148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb2056238b11891fff35b2c74e6a809f7d369d116a25803ac32725568d9461a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c599de37a76f4a9f00441a0b18a38e5315e42c49b315308d22f67c4cc68a8794\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:42:11Z\\\",\\\"message\\\":\\\"d (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 14:42:11.894312 6073 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:42:11.894354 6073 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 14:42:11.894387 6073 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 
14:42:11.894536 6073 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:42:11.895132 6073 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 14:42:11.895157 6073 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 14:42:11.895178 6073 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 14:42:11.895199 6073 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:42:11.895209 6073 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:42:11.895239 6073 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:42:11.895309 6073 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:42:11.895321 6073 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:42:11.895334 6073 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:42:11.895316 6073 factory.go:656] Stopping watch factory\\\\nI0131 14:42:11.895369 6073 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:42:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbb2056238b11891fff35b2c74e6a809f7d369d116a25803ac32725568d9461a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 14:42:13.656951 6206 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:42:13.657012 6206 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:42:13.657046 6206 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 14:42:13.657054 6206 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 14:42:13.657134 6206 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 
14:42:13.657145 6206 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:42:13.657168 6206 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 14:42:13.657186 6206 factory.go:656] Stopping watch factory\\\\nI0131 14:42:13.657200 6206 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:42:13.657232 6206 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:42:13.657247 6206 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:42:13.657256 6206 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:42:13.657266 6206 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:42:13.657275 6206 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:42:13.657286 6206 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\
\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f5e784895f315aa4d8fb6039b9e70c46f977a80fdd5d39ead81d75048abd6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:15Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.016748 4751 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay
.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.022960 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.023012 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.023029 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.023052 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.023106 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:16Z","lastTransitionTime":"2026-01-31T14:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.028733 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68aeb9c7-d3c3-4c34-96ab-bb947421c504-metrics-certs\") pod \"network-metrics-daemon-xtn6l\" (UID: \"68aeb9c7-d3c3-4c34-96ab-bb947421c504\") " pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.028827 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hljmn\" (UniqueName: \"kubernetes.io/projected/68aeb9c7-d3c3-4c34-96ab-bb947421c504-kube-api-access-hljmn\") pod \"network-metrics-daemon-xtn6l\" (UID: \"68aeb9c7-d3c3-4c34-96ab-bb947421c504\") " pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:42:16 crc kubenswrapper[4751]: E0131 14:42:16.029250 4751 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.038381 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:16 crc kubenswrapper[4751]: E0131 14:42:16.038555 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68aeb9c7-d3c3-4c34-96ab-bb947421c504-metrics-certs podName:68aeb9c7-d3c3-4c34-96ab-bb947421c504 nodeName:}" failed. No retries permitted until 2026-01-31 14:42:16.538508711 +0000 UTC m=+40.913221606 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/68aeb9c7-d3c3-4c34-96ab-bb947421c504-metrics-certs") pod "network-metrics-daemon-xtn6l" (UID: "68aeb9c7-d3c3-4c34-96ab-bb947421c504") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.055930 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"st
artedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.067200 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hljmn\" (UniqueName: \"kubernetes.io/projected/68aeb9c7-d3c3-4c34-96ab-bb947421c504-kube-api-access-hljmn\") pod \"network-metrics-daemon-xtn6l\" (UID: \"68aeb9c7-d3c3-4c34-96ab-bb947421c504\") " pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.076706 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3446cde960ab284f70a338a9a4bfee334fdbdef503d868692fb847f4df6acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20273cc05bc490f8dd5258dccd02aa0e97d1d22c159ce2c177594920f4af83de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://701178503ef461547e687442de0607af20cee8bca075347d1d1cb2a439562232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5623c591d9a6806335a9b671648fc172e749568a7ad29c3bd41f1a990ba56010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://357afc89f219ac34037aaeb7086076b36f05907502c53006aeb588eaf440cc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8f6719bdf2fef7e21f7c5cdcb3de4b4f08e69bd62e22cb10fe157f638b3c148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb2056238b11891fff35b2c74e6a809f7d369d116a25803ac32725568d9461a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c599de37a76f4a9f00441a0b18a38e5315e42c49b315308d22f67c4cc68a8794\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:42:11Z\\\",\\\"message\\\":\\\"d (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 14:42:11.894312 6073 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:42:11.894354 6073 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 14:42:11.894387 6073 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 
14:42:11.894536 6073 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:42:11.895132 6073 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 14:42:11.895157 6073 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 14:42:11.895178 6073 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 14:42:11.895199 6073 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:42:11.895209 6073 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:42:11.895239 6073 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:42:11.895309 6073 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:42:11.895321 6073 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:42:11.895334 6073 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:42:11.895316 6073 factory.go:656] Stopping watch factory\\\\nI0131 14:42:11.895369 6073 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:42:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbb2056238b11891fff35b2c74e6a809f7d369d116a25803ac32725568d9461a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 14:42:13.656951 6206 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:42:13.657012 6206 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:42:13.657046 6206 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 14:42:13.657054 6206 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 14:42:13.657134 6206 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 
14:42:13.657145 6206 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:42:13.657168 6206 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 14:42:13.657186 6206 factory.go:656] Stopping watch factory\\\\nI0131 14:42:13.657200 6206 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:42:13.657232 6206 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:42:13.657247 6206 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:42:13.657256 6206 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:42:13.657266 6206 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:42:13.657275 6206 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:42:13.657286 6206 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\
\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f5e784895f315aa4d8fb6039b9e70c46f977a80fdd5d39ead81d75048abd6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.090740 4751 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.107808 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.121487 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250c
ae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.125826 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.125868 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.125879 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:16 crc 
kubenswrapper[4751]: I0131 14:42:16.125898 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.125910 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:16Z","lastTransitionTime":"2026-01-31T14:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.138830 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14
:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 
14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.154812 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\
\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.172381 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.189844 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.209441 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.228546 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.228633 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.228661 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.228708 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.228734 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:16Z","lastTransitionTime":"2026-01-31T14:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.231184 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c893782bcbcbd370030fe164c6849abe258ce88db65e3d22c269d20d0bef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.248193 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8c0730-67df-445e-a6ce-c2edce5d9c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f27098d3c41dbb10f76ba04a8a989e91ff3eb6fe0fb0ca746e33417839235c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34680c760b5c6a6e2a521731e962301b54aa3184d5d66792fb43e991c6502a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4v79q\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.266721 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.281976 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lxrfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b895e2a-7887-41c3-b641-9c72bb085dda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d0546721c181de3a5f2f4e791dcaeba69f94467bfbf08b4470332a0b4b49fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9hbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lxrfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.299362 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xtn6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68aeb9c7-d3c3-4c34-96ab-bb947421c504\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xtn6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:16 crc 
kubenswrapper[4751]: I0131 14:42:16.331575 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.331636 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.331653 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.331680 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.331697 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:16Z","lastTransitionTime":"2026-01-31T14:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.373843 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 08:36:33.92356774 +0000 UTC Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.423996 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.434469 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.434545 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.434567 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.434593 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.434610 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:16Z","lastTransitionTime":"2026-01-31T14:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.441728 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lxrfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b895e2a-7887-41c3-b641-9c72bb085dda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d0546721c181de3a5f2f4e791dcaeba69f94467bfbf08b4470332a0b4b49fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/d
ocker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9hbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lxrfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.455893 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xtn6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68aeb9c7-d3c3-4c34-96ab-bb947421c504\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xtn6l\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.473608 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serv
iceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.489625 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.517291 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3446cde960ab284f70a338a9a4bfee334fdbdef503d868692fb847f4df6acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20273cc05bc490f8dd5258dccd02aa0e97d1d22c159ce2c177594920f4af83de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://701178503ef461547e687442de0607af20cee8bca075347d1d1cb2a439562232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5623c591d9a6806335a9b671648fc172e749568a7ad29c3bd41f1a990ba56010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://357afc89f219ac34037aaeb7086076b36f05907502c53006aeb588eaf440cc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8f6719bdf2fef7e21f7c5cdcb3de4b4f08e69bd62e22cb10fe157f638b3c148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb2056238b11891fff35b2c74e6a809f7d369d116a25803ac32725568d9461a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c599de37a76f4a9f00441a0b18a38e5315e42c49b315308d22f67c4cc68a8794\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:42:11Z\\\",\\\"message\\\":\\\"d (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 14:42:11.894312 6073 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:42:11.894354 6073 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 14:42:11.894387 6073 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 
14:42:11.894536 6073 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:42:11.895132 6073 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 14:42:11.895157 6073 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 14:42:11.895178 6073 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 14:42:11.895199 6073 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:42:11.895209 6073 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:42:11.895239 6073 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:42:11.895309 6073 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:42:11.895321 6073 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:42:11.895334 6073 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:42:11.895316 6073 factory.go:656] Stopping watch factory\\\\nI0131 14:42:11.895369 6073 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:42:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbb2056238b11891fff35b2c74e6a809f7d369d116a25803ac32725568d9461a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 14:42:13.656951 6206 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:42:13.657012 6206 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:42:13.657046 6206 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 14:42:13.657054 6206 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 14:42:13.657134 6206 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 
14:42:13.657145 6206 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:42:13.657168 6206 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 14:42:13.657186 6206 factory.go:656] Stopping watch factory\\\\nI0131 14:42:13.657200 6206 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:42:13.657232 6206 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:42:13.657247 6206 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:42:13.657256 6206 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:42:13.657266 6206 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:42:13.657275 6206 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:42:13.657286 6206 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\
\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f5e784895f315aa4d8fb6039b9e70c46f977a80fdd5d39ead81d75048abd6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.538263 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 
14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.538336 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.538365 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.538397 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.538419 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:16Z","lastTransitionTime":"2026-01-31T14:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.538331 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e00
5ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.560937 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.580199 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.601847 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.621011 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250c
ae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.633302 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68aeb9c7-d3c3-4c34-96ab-bb947421c504-metrics-certs\") pod \"network-metrics-daemon-xtn6l\" (UID: \"68aeb9c7-d3c3-4c34-96ab-bb947421c504\") " pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:42:16 crc kubenswrapper[4751]: E0131 14:42:16.633583 4751 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object 
"openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:42:16 crc kubenswrapper[4751]: E0131 14:42:16.633727 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68aeb9c7-d3c3-4c34-96ab-bb947421c504-metrics-certs podName:68aeb9c7-d3c3-4c34-96ab-bb947421c504 nodeName:}" failed. No retries permitted until 2026-01-31 14:42:17.633692489 +0000 UTC m=+42.008405434 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/68aeb9c7-d3c3-4c34-96ab-bb947421c504-metrics-certs") pod "network-metrics-daemon-xtn6l" (UID: "68aeb9c7-d3c3-4c34-96ab-bb947421c504") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.637888 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount
\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.643588 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.643638 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.643656 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.643680 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.643697 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:16Z","lastTransitionTime":"2026-01-31T14:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.656157 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.676545 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"
}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.697408 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c893782bcbcbd370030fe164c6849abe258ce88db65e3d22c269d20d0bef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plug
ins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2d
aed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"starte
dAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\
\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.710164 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8c0730-67df-445e-a6ce-c2edce5d9c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f27098d3c41dbb10f76ba04a8a989e91ff3eb6fe0fb0ca746e33417839235c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34680c760b5c6a6e2a521731e962301b54aa3
184d5d66792fb43e991c6502a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4v79q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.746325 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.746381 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.746397 4751 kubelet_node_status.go:724] "Recording 
event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.746419 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.746436 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:16Z","lastTransitionTime":"2026-01-31T14:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.850312 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.850372 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.850391 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.850420 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.850448 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:16Z","lastTransitionTime":"2026-01-31T14:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.953895 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.953961 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.953981 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.954005 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:16 crc kubenswrapper[4751]: I0131 14:42:16.954025 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:16Z","lastTransitionTime":"2026-01-31T14:42:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.057206 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.057281 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.057307 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.057336 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.057356 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:17Z","lastTransitionTime":"2026-01-31T14:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.160618 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.160705 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.160731 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.160763 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.160781 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:17Z","lastTransitionTime":"2026-01-31T14:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.263932 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.263991 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.264009 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.264034 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.264055 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:17Z","lastTransitionTime":"2026-01-31T14:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.366392 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.366475 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.366498 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.366526 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.366544 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:17Z","lastTransitionTime":"2026-01-31T14:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.374670 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 23:42:51.95951734 +0000 UTC Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.405381 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.405468 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.405475 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.405548 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:42:17 crc kubenswrapper[4751]: E0131 14:42:17.406363 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:42:17 crc kubenswrapper[4751]: E0131 14:42:17.406208 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:42:17 crc kubenswrapper[4751]: E0131 14:42:17.406507 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:42:17 crc kubenswrapper[4751]: E0131 14:42:17.406634 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.469232 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.469298 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.469315 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.469345 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.469363 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:17Z","lastTransitionTime":"2026-01-31T14:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.571916 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.571969 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.571985 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.572010 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.572028 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:17Z","lastTransitionTime":"2026-01-31T14:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.642124 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68aeb9c7-d3c3-4c34-96ab-bb947421c504-metrics-certs\") pod \"network-metrics-daemon-xtn6l\" (UID: \"68aeb9c7-d3c3-4c34-96ab-bb947421c504\") " pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:42:17 crc kubenswrapper[4751]: E0131 14:42:17.642354 4751 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:42:17 crc kubenswrapper[4751]: E0131 14:42:17.642458 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68aeb9c7-d3c3-4c34-96ab-bb947421c504-metrics-certs podName:68aeb9c7-d3c3-4c34-96ab-bb947421c504 nodeName:}" failed. No retries permitted until 2026-01-31 14:42:19.642427097 +0000 UTC m=+44.017140012 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/68aeb9c7-d3c3-4c34-96ab-bb947421c504-metrics-certs") pod "network-metrics-daemon-xtn6l" (UID: "68aeb9c7-d3c3-4c34-96ab-bb947421c504") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.675659 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.675718 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.675735 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.675761 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.675780 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:17Z","lastTransitionTime":"2026-01-31T14:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.778874 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.778934 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.778956 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.778985 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.779008 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:17Z","lastTransitionTime":"2026-01-31T14:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.882410 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.882469 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.882571 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.882607 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.882631 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:17Z","lastTransitionTime":"2026-01-31T14:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.984918 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.984959 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.984971 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.984989 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:17 crc kubenswrapper[4751]: I0131 14:42:17.985002 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:17Z","lastTransitionTime":"2026-01-31T14:42:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.088179 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.088269 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.088290 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.088314 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.088330 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:18Z","lastTransitionTime":"2026-01-31T14:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.191722 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.191774 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.191792 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.191816 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.191836 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:18Z","lastTransitionTime":"2026-01-31T14:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.294530 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.294615 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.294636 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.294659 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.294677 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:18Z","lastTransitionTime":"2026-01-31T14:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.374846 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 15:37:36.825698702 +0000 UTC Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.397237 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.397286 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.397302 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.397324 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.397342 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:18Z","lastTransitionTime":"2026-01-31T14:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.500399 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.500464 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.500486 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.500532 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.500557 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:18Z","lastTransitionTime":"2026-01-31T14:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.603473 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.603542 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.603560 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.603584 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.603602 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:18Z","lastTransitionTime":"2026-01-31T14:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.713707 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.713758 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.713770 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.713787 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.713798 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:18Z","lastTransitionTime":"2026-01-31T14:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.816432 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.816490 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.816512 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.816542 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.816564 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:18Z","lastTransitionTime":"2026-01-31T14:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.919925 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.919998 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.920021 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.920049 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:18 crc kubenswrapper[4751]: I0131 14:42:18.920125 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:18Z","lastTransitionTime":"2026-01-31T14:42:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.023358 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.023427 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.023445 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.023470 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.023487 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:19Z","lastTransitionTime":"2026-01-31T14:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.127773 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.127863 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.127881 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.127905 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.127923 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:19Z","lastTransitionTime":"2026-01-31T14:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.231225 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.231285 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.231301 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.231324 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.231343 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:19Z","lastTransitionTime":"2026-01-31T14:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.334580 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.334644 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.334663 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.334688 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.334706 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:19Z","lastTransitionTime":"2026-01-31T14:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.375586 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 16:51:05.085805033 +0000 UTC Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.404908 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.404987 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:42:19 crc kubenswrapper[4751]: E0131 14:42:19.405423 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.405063 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:42:19 crc kubenswrapper[4751]: E0131 14:42:19.405539 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.405042 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:42:19 crc kubenswrapper[4751]: E0131 14:42:19.405629 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:42:19 crc kubenswrapper[4751]: E0131 14:42:19.405434 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.437168 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.437207 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.437215 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.437229 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.437239 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:19Z","lastTransitionTime":"2026-01-31T14:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.540362 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.540435 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.540457 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.540488 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.540509 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:19Z","lastTransitionTime":"2026-01-31T14:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.643553 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.643623 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.643640 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.643665 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.643685 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:19Z","lastTransitionTime":"2026-01-31T14:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.665218 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68aeb9c7-d3c3-4c34-96ab-bb947421c504-metrics-certs\") pod \"network-metrics-daemon-xtn6l\" (UID: \"68aeb9c7-d3c3-4c34-96ab-bb947421c504\") " pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:42:19 crc kubenswrapper[4751]: E0131 14:42:19.665485 4751 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:42:19 crc kubenswrapper[4751]: E0131 14:42:19.665613 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68aeb9c7-d3c3-4c34-96ab-bb947421c504-metrics-certs podName:68aeb9c7-d3c3-4c34-96ab-bb947421c504 nodeName:}" failed. No retries permitted until 2026-01-31 14:42:23.665584711 +0000 UTC m=+48.040297636 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/68aeb9c7-d3c3-4c34-96ab-bb947421c504-metrics-certs") pod "network-metrics-daemon-xtn6l" (UID: "68aeb9c7-d3c3-4c34-96ab-bb947421c504") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.747236 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.747547 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.747821 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.748167 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.748730 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:19Z","lastTransitionTime":"2026-01-31T14:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.852101 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.852152 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.852170 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.852193 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.852212 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:19Z","lastTransitionTime":"2026-01-31T14:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.955704 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.955766 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.955784 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.955808 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:19 crc kubenswrapper[4751]: I0131 14:42:19.955825 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:19Z","lastTransitionTime":"2026-01-31T14:42:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.060063 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.060431 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.060628 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.060847 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.060993 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:20Z","lastTransitionTime":"2026-01-31T14:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.164328 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.164777 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.165042 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.165634 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.166157 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:20Z","lastTransitionTime":"2026-01-31T14:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.269273 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.269624 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.269870 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.270141 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.270302 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:20Z","lastTransitionTime":"2026-01-31T14:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.374521 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.374584 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.374602 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.374633 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.374651 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:20Z","lastTransitionTime":"2026-01-31T14:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.375715 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 09:10:57.416654753 +0000 UTC Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.477311 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.477376 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.477398 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.477428 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.477453 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:20Z","lastTransitionTime":"2026-01-31T14:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.580188 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.580238 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.580254 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.580277 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.580295 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:20Z","lastTransitionTime":"2026-01-31T14:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.683868 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.683925 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.683941 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.683964 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.683982 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:20Z","lastTransitionTime":"2026-01-31T14:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.787244 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.787301 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.787319 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.787344 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.787360 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:20Z","lastTransitionTime":"2026-01-31T14:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.889899 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.889951 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.889969 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.889993 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.890010 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:20Z","lastTransitionTime":"2026-01-31T14:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.992639 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.992704 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.992721 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.992744 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:20 crc kubenswrapper[4751]: I0131 14:42:20.992761 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:20Z","lastTransitionTime":"2026-01-31T14:42:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.098634 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.098687 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.098702 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.098723 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.098743 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:21Z","lastTransitionTime":"2026-01-31T14:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.201482 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.201548 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.201574 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.201601 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.201623 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:21Z","lastTransitionTime":"2026-01-31T14:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.305191 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.305549 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.305721 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.305934 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.306154 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:21Z","lastTransitionTime":"2026-01-31T14:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.376019 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 17:21:43.91524379 +0000 UTC Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.405592 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:21 crc kubenswrapper[4751]: E0131 14:42:21.405999 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.405722 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:42:21 crc kubenswrapper[4751]: E0131 14:42:21.406396 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.405726 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.405664 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:42:21 crc kubenswrapper[4751]: E0131 14:42:21.406891 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:42:21 crc kubenswrapper[4751]: E0131 14:42:21.407052 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.411420 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.411498 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.411515 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.411534 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.411549 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:21Z","lastTransitionTime":"2026-01-31T14:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.520215 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.520573 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.520747 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.521002 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.521248 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:21Z","lastTransitionTime":"2026-01-31T14:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.624977 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.625027 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.625043 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.625101 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.625119 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:21Z","lastTransitionTime":"2026-01-31T14:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.727943 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.728023 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.728044 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.728107 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.728125 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:21Z","lastTransitionTime":"2026-01-31T14:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.831153 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.831643 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.831780 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.831898 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.832015 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:21Z","lastTransitionTime":"2026-01-31T14:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.935345 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.935664 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.935810 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.935946 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:21 crc kubenswrapper[4751]: I0131 14:42:21.936062 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:21Z","lastTransitionTime":"2026-01-31T14:42:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.038817 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.038865 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.038887 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.038917 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.038937 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:22Z","lastTransitionTime":"2026-01-31T14:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.142473 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.142527 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.142544 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.142566 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.142584 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:22Z","lastTransitionTime":"2026-01-31T14:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.246042 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.246134 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.246152 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.246176 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.246194 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:22Z","lastTransitionTime":"2026-01-31T14:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.349423 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.349483 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.349505 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.349539 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.349561 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:22Z","lastTransitionTime":"2026-01-31T14:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.377453 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 09:31:55.38334444 +0000 UTC Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.452891 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.452972 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.452989 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.453012 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.453030 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:22Z","lastTransitionTime":"2026-01-31T14:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.556421 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.556516 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.556536 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.556562 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.556579 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:22Z","lastTransitionTime":"2026-01-31T14:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.660259 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.660314 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.660333 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.660355 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.660372 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:22Z","lastTransitionTime":"2026-01-31T14:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.762668 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.762715 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.762732 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.762754 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.762771 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:22Z","lastTransitionTime":"2026-01-31T14:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.788368 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.788441 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.788463 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.788492 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.788514 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:22Z","lastTransitionTime":"2026-01-31T14:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:22 crc kubenswrapper[4751]: E0131 14:42:22.811634 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2bc08d22-1e39-4800-b402-ea260cc19637\\\",\\\"systemUUID\\\":\\\"ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:22Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.817264 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.817339 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.817366 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.817395 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.817416 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:22Z","lastTransitionTime":"2026-01-31T14:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:22 crc kubenswrapper[4751]: E0131 14:42:22.833614 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2bc08d22-1e39-4800-b402-ea260cc19637\\\",\\\"systemUUID\\\":\\\"ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:22Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.838596 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.838664 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.838687 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.838717 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.838737 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:22Z","lastTransitionTime":"2026-01-31T14:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:22 crc kubenswrapper[4751]: E0131 14:42:22.861179 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2bc08d22-1e39-4800-b402-ea260cc19637\\\",\\\"systemUUID\\\":\\\"ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:22Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.867198 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.867316 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.867361 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.867397 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.867421 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:22Z","lastTransitionTime":"2026-01-31T14:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:22 crc kubenswrapper[4751]: E0131 14:42:22.886693 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2bc08d22-1e39-4800-b402-ea260cc19637\\\",\\\"systemUUID\\\":\\\"ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:22Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.891312 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.891528 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.891556 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.891579 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.891597 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:22Z","lastTransitionTime":"2026-01-31T14:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:22 crc kubenswrapper[4751]: E0131 14:42:22.907385 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:22Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2bc08d22-1e39-4800-b402-ea260cc19637\\\",\\\"systemUUID\\\":\\\"ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:22Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:22 crc kubenswrapper[4751]: E0131 14:42:22.907605 4751 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.910001 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.910063 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.910111 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.910135 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:22 crc kubenswrapper[4751]: I0131 14:42:22.910154 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:22Z","lastTransitionTime":"2026-01-31T14:42:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.012908 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.012987 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.013010 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.013043 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.013065 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:23Z","lastTransitionTime":"2026-01-31T14:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.116351 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.116707 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.116792 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.116886 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.116961 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:23Z","lastTransitionTime":"2026-01-31T14:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.219562 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.219637 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.219656 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.219681 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.219698 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:23Z","lastTransitionTime":"2026-01-31T14:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.322740 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.322810 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.322832 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.322859 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.322876 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:23Z","lastTransitionTime":"2026-01-31T14:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.378456 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 17:41:18.316144547 +0000 UTC Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.405151 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.405181 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.405211 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.405367 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:42:23 crc kubenswrapper[4751]: E0131 14:42:23.405522 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:42:23 crc kubenswrapper[4751]: E0131 14:42:23.405641 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:42:23 crc kubenswrapper[4751]: E0131 14:42:23.405774 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:42:23 crc kubenswrapper[4751]: E0131 14:42:23.405898 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.425299 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.425357 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.425407 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.425430 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.425447 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:23Z","lastTransitionTime":"2026-01-31T14:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.528347 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.528442 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.528493 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.528519 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.528573 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:23Z","lastTransitionTime":"2026-01-31T14:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.631469 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.631514 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.631524 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.631538 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.631546 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:23Z","lastTransitionTime":"2026-01-31T14:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.718237 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68aeb9c7-d3c3-4c34-96ab-bb947421c504-metrics-certs\") pod \"network-metrics-daemon-xtn6l\" (UID: \"68aeb9c7-d3c3-4c34-96ab-bb947421c504\") " pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:42:23 crc kubenswrapper[4751]: E0131 14:42:23.718497 4751 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:42:23 crc kubenswrapper[4751]: E0131 14:42:23.718650 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68aeb9c7-d3c3-4c34-96ab-bb947421c504-metrics-certs podName:68aeb9c7-d3c3-4c34-96ab-bb947421c504 nodeName:}" failed. No retries permitted until 2026-01-31 14:42:31.718588932 +0000 UTC m=+56.093301857 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/68aeb9c7-d3c3-4c34-96ab-bb947421c504-metrics-certs") pod "network-metrics-daemon-xtn6l" (UID: "68aeb9c7-d3c3-4c34-96ab-bb947421c504") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.734663 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.734725 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.734743 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.734767 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.734785 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:23Z","lastTransitionTime":"2026-01-31T14:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.837794 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.837844 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.837859 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.837877 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.837888 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:23Z","lastTransitionTime":"2026-01-31T14:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.941515 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.941589 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.941609 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.941633 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:23 crc kubenswrapper[4751]: I0131 14:42:23.941649 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:23Z","lastTransitionTime":"2026-01-31T14:42:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.045161 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.045230 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.045253 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.045281 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.045302 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:24Z","lastTransitionTime":"2026-01-31T14:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.148141 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.148214 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.148238 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.148267 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.148290 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:24Z","lastTransitionTime":"2026-01-31T14:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.251528 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.251579 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.251598 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.251621 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.251638 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:24Z","lastTransitionTime":"2026-01-31T14:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.354812 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.354894 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.354917 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.354947 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.354968 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:24Z","lastTransitionTime":"2026-01-31T14:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.379188 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 00:40:19.095011238 +0000 UTC Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.458414 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.458527 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.458601 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.458631 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.458701 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:24Z","lastTransitionTime":"2026-01-31T14:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.561535 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.561590 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.561607 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.561628 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.561646 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:24Z","lastTransitionTime":"2026-01-31T14:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.664911 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.664962 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.664978 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.665000 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.665019 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:24Z","lastTransitionTime":"2026-01-31T14:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.767736 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.767779 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.767795 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.767817 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.767835 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:24Z","lastTransitionTime":"2026-01-31T14:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.871296 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.871367 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.871388 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.871416 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.871437 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:24Z","lastTransitionTime":"2026-01-31T14:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.974155 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.974219 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.974244 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.974286 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:24 crc kubenswrapper[4751]: I0131 14:42:24.974308 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:24Z","lastTransitionTime":"2026-01-31T14:42:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.076654 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.076724 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.076744 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.076769 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.076787 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:25Z","lastTransitionTime":"2026-01-31T14:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.179433 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.179489 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.179507 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.179558 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.179573 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:25Z","lastTransitionTime":"2026-01-31T14:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.282435 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.282497 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.282514 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.282537 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.282566 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:25Z","lastTransitionTime":"2026-01-31T14:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.379789 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 08:17:31.895660533 +0000 UTC Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.385359 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.385412 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.385429 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.385452 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.385469 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:25Z","lastTransitionTime":"2026-01-31T14:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.404811 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.404859 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.404950 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:42:25 crc kubenswrapper[4751]: E0131 14:42:25.405016 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.405041 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:42:25 crc kubenswrapper[4751]: E0131 14:42:25.405204 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:42:25 crc kubenswrapper[4751]: E0131 14:42:25.405315 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:42:25 crc kubenswrapper[4751]: E0131 14:42:25.405410 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.488586 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.488631 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.488647 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.488669 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.488688 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:25Z","lastTransitionTime":"2026-01-31T14:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.591395 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.591465 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.591490 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.591521 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.591542 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:25Z","lastTransitionTime":"2026-01-31T14:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.694768 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.694811 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.694822 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.694840 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.694853 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:25Z","lastTransitionTime":"2026-01-31T14:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.797942 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.798011 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.798029 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.798054 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.798101 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:25Z","lastTransitionTime":"2026-01-31T14:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.900627 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.900656 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.900664 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.900677 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:25 crc kubenswrapper[4751]: I0131 14:42:25.900686 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:25Z","lastTransitionTime":"2026-01-31T14:42:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.003519 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.003575 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.003593 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.003682 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.003704 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:26Z","lastTransitionTime":"2026-01-31T14:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.106954 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.107016 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.107035 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.107059 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.107103 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:26Z","lastTransitionTime":"2026-01-31T14:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.209648 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.209687 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.209698 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.209714 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.209726 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:26Z","lastTransitionTime":"2026-01-31T14:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.312588 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.312648 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.312664 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.312687 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.312704 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:26Z","lastTransitionTime":"2026-01-31T14:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.380513 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 12:14:25.956807236 +0000 UTC Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.415439 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.415498 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.415515 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.415539 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.415556 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:26Z","lastTransitionTime":"2026-01-31T14:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.426773 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.443152 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.467196 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3446cde960ab284f70a338a9a4bfee334fdbdef503d868692fb847f4df6acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20273cc05bc490f8dd5258dccd02aa0e97d1d22c159ce2c177594920f4af83de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://701178503ef461547e687442de0607af20cee8bca075347d1d1cb2a439562232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5623c591d9a6806335a9b671648fc172e749568a7ad29c3bd41f1a990ba56010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://357afc89f219ac34037aaeb7086076b36f05907502c53006aeb588eaf440cc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8f6719bdf2fef7e21f7c5cdcb3de4b4f08e69bd62e22cb10fe157f638b3c148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb2056238b11891fff35b2c74e6a809f7d369d116a25803ac32725568d9461a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c599de37a76f4a9f00441a0b18a38e5315e42c49b315308d22f67c4cc68a8794\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:42:11Z\\\",\\\"message\\\":\\\"d (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 14:42:11.894312 6073 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:42:11.894354 6073 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 14:42:11.894387 6073 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 
14:42:11.894536 6073 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:42:11.895132 6073 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 14:42:11.895157 6073 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 14:42:11.895178 6073 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 14:42:11.895199 6073 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:42:11.895209 6073 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:42:11.895239 6073 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:42:11.895309 6073 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:42:11.895321 6073 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:42:11.895334 6073 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:42:11.895316 6073 factory.go:656] Stopping watch factory\\\\nI0131 14:42:11.895369 6073 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:42:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:08Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbb2056238b11891fff35b2c74e6a809f7d369d116a25803ac32725568d9461a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 14:42:13.656951 6206 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:42:13.657012 6206 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:42:13.657046 6206 handler.go:190] Sending *v1.Node event handler 2 for removal\\\\nI0131 14:42:13.657054 6206 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 14:42:13.657134 6206 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 
14:42:13.657145 6206 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:42:13.657168 6206 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 14:42:13.657186 6206 factory.go:656] Stopping watch factory\\\\nI0131 14:42:13.657200 6206 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:42:13.657232 6206 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:42:13.657247 6206 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:42:13.657256 6206 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:42:13.657266 6206 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:42:13.657275 6206 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:42:13.657286 6206 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:12Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\
\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f5e784895f315aa4d8fb6039b9e70c46f977a80fdd5d39ead81d75048abd6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\
\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.490502 4751 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.511021 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.518771 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.518834 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.518852 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.518876 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.518894 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:26Z","lastTransitionTime":"2026-01-31T14:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.531536 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.548531 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250c
ae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.570871 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e00
5ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.588537 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8c0730-67df-445e-a6ce-c2edce5d9c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f27098d3c41dbb10f76ba04a8a989e91ff3eb6fe0fb0ca746e33417839235c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34680c760b5c6a6e2a521731e962301b54aa3
184d5d66792fb43e991c6502a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4v79q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.611037 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.622991 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.623377 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.623577 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.623791 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.623919 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:26Z","lastTransitionTime":"2026-01-31T14:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns 
error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.632684 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disable
d\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.657463 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready
\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"20
26-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.683960 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c893782bcbcbd370030fe164c6849abe258ce88db65e3d22c269d20d0bef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-mult
us-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a168
8df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"image\\\":\\\"quay.io/openshift-
release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":
\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.706896 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.722539 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lxrfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b895e2a-7887-41c3-b641-9c72bb085dda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d0546721c181de3a5f2f4e791dcaeba69f94467bfbf08b4470332a0b4b49fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9hbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lxrfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.726533 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.726697 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.726820 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.726941 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.727044 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:26Z","lastTransitionTime":"2026-01-31T14:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.739703 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xtn6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68aeb9c7-d3c3-4c34-96ab-bb947421c504\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xtn6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:26Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:26 crc 
kubenswrapper[4751]: I0131 14:42:26.830156 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.830209 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.830227 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.830250 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.830268 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:26Z","lastTransitionTime":"2026-01-31T14:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.933421 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.933777 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.934088 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.934213 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:26 crc kubenswrapper[4751]: I0131 14:42:26.934320 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:26Z","lastTransitionTime":"2026-01-31T14:42:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.036948 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.037271 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.037411 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.037769 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.038045 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:27Z","lastTransitionTime":"2026-01-31T14:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.140608 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.140637 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.140645 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.140657 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.140666 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:27Z","lastTransitionTime":"2026-01-31T14:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.242872 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.242934 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.242950 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.242972 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.242986 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:27Z","lastTransitionTime":"2026-01-31T14:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.345988 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.346094 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.346112 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.346134 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.346153 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:27Z","lastTransitionTime":"2026-01-31T14:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.381592 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 15:09:14.647209082 +0000 UTC Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.405512 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.405589 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.405532 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.405518 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:42:27 crc kubenswrapper[4751]: E0131 14:42:27.405690 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:42:27 crc kubenswrapper[4751]: E0131 14:42:27.405897 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:42:27 crc kubenswrapper[4751]: E0131 14:42:27.405972 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:42:27 crc kubenswrapper[4751]: E0131 14:42:27.406097 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.448466 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.448493 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.448502 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.448516 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.448525 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:27Z","lastTransitionTime":"2026-01-31T14:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.551393 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.551436 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.551449 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.551465 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.551479 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:27Z","lastTransitionTime":"2026-01-31T14:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.654340 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.654409 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.654430 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.654459 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.654480 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:27Z","lastTransitionTime":"2026-01-31T14:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.757711 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.757785 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.757806 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.757832 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.757849 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:27Z","lastTransitionTime":"2026-01-31T14:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.860581 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.860631 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.860648 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.860671 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.860719 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:27Z","lastTransitionTime":"2026-01-31T14:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.963734 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.963785 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.963801 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.963826 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:27 crc kubenswrapper[4751]: I0131 14:42:27.963843 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:27Z","lastTransitionTime":"2026-01-31T14:42:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.066732 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.066811 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.066834 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.066865 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.066890 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:28Z","lastTransitionTime":"2026-01-31T14:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.170518 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.170592 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.170611 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.170635 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.170652 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:28Z","lastTransitionTime":"2026-01-31T14:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.273835 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.273897 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.273921 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.273949 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.273972 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:28Z","lastTransitionTime":"2026-01-31T14:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.377381 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.377450 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.377467 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.377491 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.377512 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:28Z","lastTransitionTime":"2026-01-31T14:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.382149 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 02:16:35.597995444 +0000 UTC Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.406230 4751 scope.go:117] "RemoveContainer" containerID="bbb2056238b11891fff35b2c74e6a809f7d369d116a25803ac32725568d9461a" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.428410 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-aler
ter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.459566 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155f
fb5f4608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/service
account\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.476769 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c893782bcbcbd370030fe164c6849abe258ce8
8db65e3d22c269d20d0bef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath
\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e49
6fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"re
cursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restart
Count\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.479664 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.479691 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.479698 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.479711 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.479719 4751 
setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:28Z","lastTransitionTime":"2026-01-31T14:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.486785 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8c0730-67df-445e-a6ce-c2edce5d9c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f27098d3c41dbb10f76ba04a8a989e91ff3eb6fe0fb0ca746e33417839235c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08
aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34680c760b5c6a6e2a521731e962301b54aa3184d5d66792fb43e991c6502a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4v79q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.497937 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",
\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha2
56:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.507194 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lxrfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b895e2a-7887-41c3-b641-9c72bb085dda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d0546721c181de3a5f2f4e791dcaeba69f94467bfbf08b4470332a0b4b49fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9hbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lxrfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.516882 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xtn6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68aeb9c7-d3c3-4c34-96ab-bb947421c504\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xtn6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:28 crc 
kubenswrapper[4751]: I0131 14:42:28.529317 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.539219 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.562088 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3446cde960ab284f70a338a9a4bfee334fdbdef503d868692fb847f4df6acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20273cc05bc490f8dd5258dccd02aa0e97d1d22c159ce2c177594920f4af83de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://701178503ef461547e687442de0607af20cee8bca075347d1d1cb2a439562232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5623c591d9a6806335a9b671648fc172e749568a7ad29c3bd41f1a990ba56010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://357afc89f219ac34037aaeb7086076b36f05907502c53006aeb588eaf440cc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8f6719bdf2fef7e21f7c5cdcb3de4b4f08e69bd62e22cb10fe157f638b3c148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://bbb2056238b11891fff35b2c74e6a809f7d369d116a25803ac32725568d9461a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbb2056238b11891fff35b2c74e6a809f7d369d116a25803ac32725568d9461a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 14:42:13.656951 6206 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:42:13.657012 6206 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:42:13.657046 6206 handler.go:190] Sending *v1.Node event handler 2 for 
removal\\\\nI0131 14:42:13.657054 6206 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 14:42:13.657134 6206 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 14:42:13.657145 6206 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:42:13.657168 6206 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 14:42:13.657186 6206 factory.go:656] Stopping watch factory\\\\nI0131 14:42:13.657200 6206 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:42:13.657232 6206 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:42:13.657247 6206 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:42:13.657256 6206 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:42:13.657266 6206 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:42:13.657275 6206 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:42:13.657286 6206 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n8cdt_openshift-ovn-kubernetes(ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f5e784895f315aa4d8fb6039b9e70c46f977a80fdd5d39ead81d75048abd6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab
2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.579519 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.582374 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.582398 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.582406 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.582419 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.582427 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:28Z","lastTransitionTime":"2026-01-31T14:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.597814 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"r
unning\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8
df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.616492 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.633329 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.648143 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.678251 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250c
ae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.703085 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.703126 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.703138 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:28 crc 
kubenswrapper[4751]: I0131 14:42:28.703155 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.703166 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:28Z","lastTransitionTime":"2026-01-31T14:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.791701 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8cdt_ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b/ovnkube-controller/1.log" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.808415 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.808493 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.808518 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.808547 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.808571 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:28Z","lastTransitionTime":"2026-01-31T14:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.808717 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" event={"ID":"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b","Type":"ContainerStarted","Data":"307462c29efce1de33c4d50cdbc74a6cb86d00b128964370024b108640b05b5e"} Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.808922 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.825465 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e
eaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":
\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.848235 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c893782bcbcbd370030fe164c6849abe258ce88db65e3d22c269d20d0bef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3
d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8
402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"con
tainerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:0
7Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.863879 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8c0730-67df-445e-a6ce-c2edce5d9c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f27098d3c41dbb10f76ba04a8a989e91ff3eb6fe0fb0ca746e33417839235c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34680c760b5c6a6e2a521731e962301b54aa3
184d5d66792fb43e991c6502a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4v79q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.883778 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.898410 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.911056 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.911144 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.911152 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.911165 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.911173 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:28Z","lastTransitionTime":"2026-01-31T14:42:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.912144 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xtn6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68aeb9c7-d3c3-4c34-96ab-bb947421c504\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xtn6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:28 crc 
kubenswrapper[4751]: I0131 14:42:28.917115 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.924919 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.927025 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.938354 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lxrfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b895e2a-7887-41c3-b641-9c72bb085dda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d0546721c181de3a5f2f4e791dcaeba69f94467bfbf08b4470332a0b4b49fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9hbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lxrfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.957917 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3446cde960ab284f70a338a9a4bfee334fdbdef503d868692fb847f4df6acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20273cc05bc490f8dd5258dccd02aa0e97d1d22c159ce2c177594920f4af83de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://701178503ef461547e687442de0607af20cee8bca075347d1d1cb2a439562232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5623c591d9a6806335a9b671648fc172e749568a7ad29c3bd41f1a990ba56010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://357afc89f219ac34037aaeb7086076b36f05907502c53006aeb588eaf440cc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8f6719bdf2fef7e21f7c5cdcb3de4b4f08e69bd62e22cb10fe157f638b3c148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://307462c29efce1de33c4d50cdbc74a6cb86d00b128964370024b108640b05b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbb2056238b11891fff35b2c74e6a809f7d369d116a25803ac32725568d9461a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 14:42:13.656951 6206 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:42:13.657012 6206 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:42:13.657046 6206 handler.go:190] Sending *v1.Node event handler 2 for 
removal\\\\nI0131 14:42:13.657054 6206 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 14:42:13.657134 6206 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 14:42:13.657145 6206 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:42:13.657168 6206 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 14:42:13.657186 6206 factory.go:656] Stopping watch factory\\\\nI0131 14:42:13.657200 6206 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:42:13.657232 6206 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:42:13.657247 6206 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:42:13.657256 6206 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:42:13.657266 6206 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:42:13.657275 6206 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:42:13.657286 6206 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f5e784895f315aa4d8fb6039b9e70c46f977a80fdd5d39ead81d75048abd6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.973432 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:28 crc kubenswrapper[4751]: I0131 14:42:28.994623 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:28Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.013571 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.013608 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.013616 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.013632 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.013642 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:29Z","lastTransitionTime":"2026-01-31T14:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.023929 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3
4720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:29Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.072785 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:29Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.086523 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:29Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.098566 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:29Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.108010 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250c
ae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:29Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.116173 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.116206 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.116215 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:29 crc 
kubenswrapper[4751]: I0131 14:42:29.116228 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.116238 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:29Z","lastTransitionTime":"2026-01-31T14:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.120303 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":t
rue,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:29Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 
14:42:29.129338 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"19
2.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:29Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.145628 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3446cde960ab284f70a338a9a4bfee334fdbdef503d868692fb847f4df6acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20273cc05bc490f8dd5258dccd02aa0e97d1d22c159ce2c177594920f4af83de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://701178503ef461547e687442de0607af20cee8bca075347d1d1cb2a439562232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5623c591d9a6806335a9b671648fc172e749568a7ad29c3bd41f1a990ba56010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://357afc89f219ac34037aaeb7086076b36f05907502c53006aeb588eaf440cc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8f6719bdf2fef7e21f7c5cdcb3de4b4f08e69bd62e22cb10fe157f638b3c148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://307462c29efce1de33c4d50cdbc74a6cb86d00b128964370024b108640b05b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbb2056238b11891fff35b2c74e6a809f7d369d116a25803ac32725568d9461a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 14:42:13.656951 6206 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:42:13.657012 6206 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:42:13.657046 6206 handler.go:190] Sending *v1.Node event handler 2 for 
removal\\\\nI0131 14:42:13.657054 6206 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 14:42:13.657134 6206 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 14:42:13.657145 6206 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:42:13.657168 6206 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 14:42:13.657186 6206 factory.go:656] Stopping watch factory\\\\nI0131 14:42:13.657200 6206 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:42:13.657232 6206 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:42:13.657247 6206 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:42:13.657256 6206 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:42:13.657266 6206 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:42:13.657275 6206 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:42:13.657286 6206 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 
1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\"
:\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f5e784895f315aa4d8fb6039b9e70c46f977a80fdd5d39ead81d75048abd6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\
":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:29Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.155736 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:29Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.165779 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250c
ae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:29Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.177300 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e00
5ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:29Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.188309 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64468352-f9fe-48bb-b204-b9f828c06bf8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56c1e31014f9e3d0be8140f58cff1c752ad4be1c6c60a942bc18320bbd37b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a7c5739a571e6f3ec88c3798ad2604382b9320c44ddda3d41681a64c6ab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a6478c4477b785bcb405d597f1c835faaf4ef7adb3a2bcd6e70cc2e692f44d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cbc9dfffff59a784e871d47bf2ab1b8419b11840eb329ce21c0f7c24e6e77c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3cbc9dfffff59a784e871d47bf2ab1b8419b11840eb329ce21c0f7c24e6e77c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:29Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.206323 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:29Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.218679 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:29Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.218824 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.218853 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.218863 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.218877 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.218885 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:29Z","lastTransitionTime":"2026-01-31T14:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.230708 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\
\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\
\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:29Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.241818 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:29Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.253662 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:29Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.270574 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c893782bcbcbd370030fe164c6849abe258ce88db65e3d22c269d20d0bef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:29Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.278976 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.279103 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.279125 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.279147 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.279166 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:29 crc kubenswrapper[4751]: E0131 14:42:29.279251 4751 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:42:29 crc kubenswrapper[4751]: E0131 14:42:29.279248 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:42:29 crc kubenswrapper[4751]: E0131 14:42:29.279281 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:42:29 crc kubenswrapper[4751]: E0131 14:42:29.279289 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:43:01.279277823 +0000 UTC m=+85.653990708 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:42:29 crc kubenswrapper[4751]: E0131 14:42:29.279294 4751 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:42:29 crc kubenswrapper[4751]: E0131 14:42:29.279398 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:42:29 crc kubenswrapper[4751]: E0131 14:42:29.279408 4751 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:42:29 crc kubenswrapper[4751]: E0131 14:42:29.279411 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:42:29 crc kubenswrapper[4751]: E0131 14:42:29.279424 4751 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:42:29 crc kubenswrapper[4751]: E0131 14:42:29.279444 4751 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:43:01.279438418 +0000 UTC m=+85.654151303 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:42:29 crc kubenswrapper[4751]: E0131 14:42:29.279468 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-31 14:43:01.279453348 +0000 UTC m=+85.654166233 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:42:29 crc kubenswrapper[4751]: E0131 14:42:29.279491 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 14:43:01.279485429 +0000 UTC m=+85.654198314 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:42:29 crc kubenswrapper[4751]: E0131 14:42:29.279551 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:43:01.2795449 +0000 UTC m=+85.654257785 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.282387 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8c0730-67df-445e-a6ce-c2edce5d9c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f27098d3c41dbb10f76ba04a8a989e91ff3eb6fe0fb0ca746e33417839235c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34680c760b5c6a6e2a521731e962301b54aa3
184d5d66792fb43e991c6502a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4v79q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:29Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.304318 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:29Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.315378 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lxrfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b895e2a-7887-41c3-b641-9c72bb085dda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d0546721c181de3a5f2f4e791dcaeba69f94467bfbf08b4470332a0b4b49fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9hbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lxrfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:29Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.320866 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.320930 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.320947 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.320969 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.320986 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:29Z","lastTransitionTime":"2026-01-31T14:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.330620 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xtn6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68aeb9c7-d3c3-4c34-96ab-bb947421c504\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xtn6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:29Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:29 crc 
kubenswrapper[4751]: I0131 14:42:29.383213 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 17:31:37.820269618 +0000 UTC Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.405665 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.405712 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.405767 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.405771 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:42:29 crc kubenswrapper[4751]: E0131 14:42:29.405844 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:42:29 crc kubenswrapper[4751]: E0131 14:42:29.405931 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:42:29 crc kubenswrapper[4751]: E0131 14:42:29.406131 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:42:29 crc kubenswrapper[4751]: E0131 14:42:29.406247 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.424232 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.424281 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.424297 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.424316 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.424333 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:29Z","lastTransitionTime":"2026-01-31T14:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.527284 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.527342 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.527358 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.527380 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.527397 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:29Z","lastTransitionTime":"2026-01-31T14:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.630043 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.630139 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.630157 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.630181 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.630199 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:29Z","lastTransitionTime":"2026-01-31T14:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.732617 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.732681 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.732699 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.732725 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.732743 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:29Z","lastTransitionTime":"2026-01-31T14:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.815243 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8cdt_ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b/ovnkube-controller/2.log" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.816363 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8cdt_ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b/ovnkube-controller/1.log" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.820708 4751 generic.go:334] "Generic (PLEG): container finished" podID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerID="307462c29efce1de33c4d50cdbc74a6cb86d00b128964370024b108640b05b5e" exitCode=1 Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.820786 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" event={"ID":"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b","Type":"ContainerDied","Data":"307462c29efce1de33c4d50cdbc74a6cb86d00b128964370024b108640b05b5e"} Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.820890 4751 scope.go:117] "RemoveContainer" containerID="bbb2056238b11891fff35b2c74e6a809f7d369d116a25803ac32725568d9461a" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.822547 4751 scope.go:117] "RemoveContainer" containerID="307462c29efce1de33c4d50cdbc74a6cb86d00b128964370024b108640b05b5e" Jan 31 14:42:29 crc kubenswrapper[4751]: E0131 14:42:29.822798 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-n8cdt_openshift-ovn-kubernetes(ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.835858 4751 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.835896 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.835914 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.835937 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.835955 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:29Z","lastTransitionTime":"2026-01-31T14:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.842462 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d
95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:29Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.861979 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:29Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.883425 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:29Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.906106 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c893782bcbcbd370030fe164c6849abe258ce88db65e3d22c269d20d0bef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-a
ccess-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"
started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\
\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabout
s-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:29Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.923884 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8c0730-67df-445e-a6ce-c2edce5d9c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f27098d3c41dbb10f76ba04a8a989e91ff3eb6fe0fb0ca746e33417839235c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18
d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34680c760b5c6a6e2a521731e962301b54aa3184d5d66792fb43e991c6502a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:14Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4v79q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:29Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.942327 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.942408 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.942431 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.942457 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.942484 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:29Z","lastTransitionTime":"2026-01-31T14:42:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.945717 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:29Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.961817 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lxrfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b895e2a-7887-41c3-b641-9c72bb085dda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d0546721c181de3a5f2f4e791dcaeba69f94467bfbf08b4470332a0b4b49fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9hbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lxrfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:29Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:29 crc kubenswrapper[4751]: I0131 14:42:29.978783 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xtn6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68aeb9c7-d3c3-4c34-96ab-bb947421c504\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xtn6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:29Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:29 crc 
kubenswrapper[4751]: I0131 14:42:29.992955 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:29Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.008984 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.041426 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3446cde960ab284f70a338a9a4bfee334fdbdef503d868692fb847f4df6acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20273cc05bc490f8dd5258dccd02aa0e97d1d22c159ce2c177594920f4af83de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://701178503ef461547e687442de0607af20cee8bca075347d1d1cb2a439562232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5623c591d9a6806335a9b671648fc172e749568a7ad29c3bd41f1a990ba56010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://357afc89f219ac34037aaeb7086076b36f05907502c53006aeb588eaf440cc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8f6719bdf2fef7e21f7c5cdcb3de4b4f08e69bd62e22cb10fe157f638b3c148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://307462c29efce1de33c4d50cdbc74a6cb86d00b128964370024b108640b05b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bbb2056238b11891fff35b2c74e6a809f7d369d116a25803ac32725568d9461a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"message\\\":\\\"e (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0131 14:42:13.656951 6206 handler.go:190] Sending *v1.NetworkPolicy event handler 4 for removal\\\\nI0131 14:42:13.657012 6206 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0131 14:42:13.657046 6206 handler.go:190] Sending *v1.Node event handler 2 for 
removal\\\\nI0131 14:42:13.657054 6206 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0131 14:42:13.657134 6206 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0131 14:42:13.657145 6206 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0131 14:42:13.657168 6206 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0131 14:42:13.657186 6206 factory.go:656] Stopping watch factory\\\\nI0131 14:42:13.657200 6206 ovnkube.go:599] Stopped ovnkube\\\\nI0131 14:42:13.657232 6206 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:42:13.657247 6206 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:42:13.657256 6206 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:42:13.657266 6206 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:42:13.657275 6206 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:42:13.657286 6206 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0131 1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:12Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://307462c29efce1de33c4d50cdbc74a6cb86d00b128964370024b108640b05b5e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:42:29Z\\\",\\\"message\\\":\\\"t:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 14:42:29.388826 6425 services_controller.go:443] Built service openshift-etcd/etcd LB cluster-wide configs for network=default: []services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.253\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:2379, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), 
V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.253\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9979, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0131 14:42:29.388860 6425 services_controller.go:444] Built service openshift-etcd/etcd LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0131 14:42:29.388874 6425 services_controller.go:445] Built service openshift-etcd/etcd LB template configs for network=default: []services.lbConfig(nil)\\\\nF0131 14:42:29.388002 6425 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"h
ost-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f5e784895f315aa4d8fb6039b9e70c46f977a80fdd5d39ead81d75048abd6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\
\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.050328 
4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.050376 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.050393 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.050417 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.050436 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:30Z","lastTransitionTime":"2026-01-31T14:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.057121 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"}
,{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250cae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.079128 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e00
5ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.096970 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64468352-f9fe-48bb-b204-b9f828c06bf8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56c1e31014f9e3d0be8140f58cff1c752ad4be1c6c60a942bc18320bbd37b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a7c5739a571e6f3ec88c3798ad2604382b9320c44ddda3d41681a64c6ab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a6478c4477b785bcb405d597f1c835faaf4ef7adb3a2bcd6e70cc2e692f44d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cbc9dfffff59a784e871d47bf2ab1b8419b11840eb329ce21c0f7c24e6e77c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3cbc9dfffff59a784e871d47bf2ab1b8419b11840eb329ce21c0f7c24e6e77c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.114476 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.132335 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.152491 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:30Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.153130 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.153283 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.153409 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.153507 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.153631 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:30Z","lastTransitionTime":"2026-01-31T14:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.256740 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.256789 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.256806 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.256830 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.256846 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:30Z","lastTransitionTime":"2026-01-31T14:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.359809 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.360216 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.360342 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.360496 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.360634 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:30Z","lastTransitionTime":"2026-01-31T14:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.384164 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 17:57:17.079998756 +0000 UTC Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.464145 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.464265 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.464288 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.464316 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.464342 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:30Z","lastTransitionTime":"2026-01-31T14:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.568130 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.568208 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.568232 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.568264 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.568290 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:30Z","lastTransitionTime":"2026-01-31T14:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.672826 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.672908 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.672934 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.672968 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.672995 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:30Z","lastTransitionTime":"2026-01-31T14:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.775636 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.775681 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.775693 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.775709 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.775722 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:30Z","lastTransitionTime":"2026-01-31T14:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.826388 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8cdt_ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b/ovnkube-controller/2.log" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.878273 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.878332 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.878349 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.878372 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.878389 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:30Z","lastTransitionTime":"2026-01-31T14:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.981651 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.981711 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.981728 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.981752 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:30 crc kubenswrapper[4751]: I0131 14:42:30.981769 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:30Z","lastTransitionTime":"2026-01-31T14:42:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.084651 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.084711 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.084728 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.084752 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.084770 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:31Z","lastTransitionTime":"2026-01-31T14:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.187871 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.187927 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.187939 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.187960 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.187975 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:31Z","lastTransitionTime":"2026-01-31T14:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.290808 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.290880 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.290898 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.290924 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.290943 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:31Z","lastTransitionTime":"2026-01-31T14:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.385325 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-04 00:26:51.693853654 +0000 UTC Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.393325 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.393383 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.393402 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.393426 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.393443 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:31Z","lastTransitionTime":"2026-01-31T14:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.404905 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.404952 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.404960 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:31 crc kubenswrapper[4751]: E0131 14:42:31.405596 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.405014 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:42:31 crc kubenswrapper[4751]: E0131 14:42:31.405741 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:42:31 crc kubenswrapper[4751]: E0131 14:42:31.405866 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:42:31 crc kubenswrapper[4751]: E0131 14:42:31.406261 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.496272 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.496669 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.496880 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.497133 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.497305 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:31Z","lastTransitionTime":"2026-01-31T14:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.611590 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.612318 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.612345 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.612377 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.612399 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:31Z","lastTransitionTime":"2026-01-31T14:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.716178 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.716480 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.716671 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.716821 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.716988 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:31Z","lastTransitionTime":"2026-01-31T14:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.816065 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68aeb9c7-d3c3-4c34-96ab-bb947421c504-metrics-certs\") pod \"network-metrics-daemon-xtn6l\" (UID: \"68aeb9c7-d3c3-4c34-96ab-bb947421c504\") " pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:42:31 crc kubenswrapper[4751]: E0131 14:42:31.816365 4751 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:42:31 crc kubenswrapper[4751]: E0131 14:42:31.816775 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68aeb9c7-d3c3-4c34-96ab-bb947421c504-metrics-certs podName:68aeb9c7-d3c3-4c34-96ab-bb947421c504 nodeName:}" failed. No retries permitted until 2026-01-31 14:42:47.81674671 +0000 UTC m=+72.191459625 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/68aeb9c7-d3c3-4c34-96ab-bb947421c504-metrics-certs") pod "network-metrics-daemon-xtn6l" (UID: "68aeb9c7-d3c3-4c34-96ab-bb947421c504") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.819550 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.819722 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.819839 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.819976 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.820177 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:31Z","lastTransitionTime":"2026-01-31T14:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.924243 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.924539 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.924716 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.924869 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:31 crc kubenswrapper[4751]: I0131 14:42:31.925003 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:31Z","lastTransitionTime":"2026-01-31T14:42:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.028393 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.028472 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.028496 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.028527 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.028550 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:32Z","lastTransitionTime":"2026-01-31T14:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.131171 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.131543 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.132114 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.132516 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.132823 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:32Z","lastTransitionTime":"2026-01-31T14:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.236276 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.236631 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.236868 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.237169 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.237391 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:32Z","lastTransitionTime":"2026-01-31T14:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.340947 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.341305 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.341446 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.341581 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.341721 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:32Z","lastTransitionTime":"2026-01-31T14:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.385837 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 11:22:14.223147652 +0000 UTC Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.445742 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.446100 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.447495 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.448001 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.448338 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:32Z","lastTransitionTime":"2026-01-31T14:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.551835 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.551897 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.551974 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.551998 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.552016 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:32Z","lastTransitionTime":"2026-01-31T14:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.610672 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.612322 4751 scope.go:117] "RemoveContainer" containerID="307462c29efce1de33c4d50cdbc74a6cb86d00b128964370024b108640b05b5e" Jan 31 14:42:32 crc kubenswrapper[4751]: E0131 14:42:32.612675 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-n8cdt_openshift-ovn-kubernetes(ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.627464 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:32Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.647226 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:32Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.656192 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.656243 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.656259 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.656282 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.656299 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:32Z","lastTransitionTime":"2026-01-31T14:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.668455 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:32Z 
is after 2025-08-24T17:21:41Z" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.692965 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c893782bcbcbd370030fe164c6849abe258ce88db65e3d22c269d20d0bef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"contai
nerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:32Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.711516 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8c0730-67df-445e-a6ce-c2edce5d9c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f27098d3c41dbb10f76ba04a8a989e91ff3eb6fe0fb0ca746e33417839235c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34680c760b5c6a6e2a521731e962301b54aa3184d5d66792fb43e991c6502a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4v79q\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:32Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.733613 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:32Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.752161 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lxrfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b895e2a-7887-41c3-b641-9c72bb085dda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d0546721c181de3a5f2f4e791dcaeba69f94467bfbf08b4470332a0b4b49fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9hbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lxrfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:32Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.759765 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.759864 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.759883 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.759925 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.759942 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:32Z","lastTransitionTime":"2026-01-31T14:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.772652 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xtn6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68aeb9c7-d3c3-4c34-96ab-bb947421c504\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xtn6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:32Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:32 crc 
kubenswrapper[4751]: I0131 14:42:32.794098 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:32Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.812915 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:32Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.836638 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3446cde960ab284f70a338a9a4bfee334fdbdef503d868692fb847f4df6acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20273cc05bc490f8dd5258dccd02aa0e97d1d22c159ce2c177594920f4af83de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://701178503ef461547e687442de0607af20cee8bca075347d1d1cb2a439562232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5623c591d9a6806335a9b671648fc172e749568a7ad29c3bd41f1a990ba56010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://357afc89f219ac34037aaeb7086076b36f05907502c53006aeb588eaf440cc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8f6719bdf2fef7e21f7c5cdcb3de4b4f08e69bd62e22cb10fe157f638b3c148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://307462c29efce1de33c4d50cdbc74a6cb86d00b128964370024b108640b05b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://307462c29efce1de33c4d50cdbc74a6cb86d00b128964370024b108640b05b5e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:42:29Z\\\",\\\"message\\\":\\\"t:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 14:42:29.388826 6425 services_controller.go:443] Built service openshift-etcd/etcd LB cluster-wide configs for network=default: 
[]services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.253\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:2379, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.253\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9979, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0131 14:42:29.388860 6425 services_controller.go:444] Built service openshift-etcd/etcd LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0131 14:42:29.388874 6425 services_controller.go:445] Built service openshift-etcd/etcd LB template configs for network=default: []services.lbConfig(nil)\\\\nF0131 14:42:29.388002 6425 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n8cdt_openshift-ovn-kubernetes(ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f5e784895f315aa4d8fb6039b9e70c46f977a80fdd5d39ead81d75048abd6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab
2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:32Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.860582 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:32Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.862400 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.862444 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.862461 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:32 crc 
kubenswrapper[4751]: I0131 14:42:32.862485 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.862503 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:32Z","lastTransitionTime":"2026-01-31T14:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.884401 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250cae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:32Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.905368 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\
"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o:/
/8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{
\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:32Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.919565 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64468352-f9fe-48bb-b204-b9f828c06bf8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56c1e31014f9e3d0be8140f58cff1c752ad4be1c6c60a942bc18320bbd37b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a7c5739a571e6f3ec88c3798ad2604382b9320c44ddda3d41681a64c6ab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a6478c4477b785bcb405d597f1c835faaf4ef7adb3a2bcd6e70cc2e692f44d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cbc9dfffff59a784e871d47bf2ab1b8419b11840eb329ce21c0f7c24e6e77c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3cbc9dfffff59a784e871d47bf2ab1b8419b11840eb329ce21c0f7c24e6e77c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:32Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.942282 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:32Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.961008 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:32Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.965256 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.965289 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.965311 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.965329 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:32 crc kubenswrapper[4751]: I0131 14:42:32.965341 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:32Z","lastTransitionTime":"2026-01-31T14:42:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.068399 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.068461 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.068479 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.068504 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.068521 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:33Z","lastTransitionTime":"2026-01-31T14:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.139642 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.139699 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.139741 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.139765 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.139783 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:33Z","lastTransitionTime":"2026-01-31T14:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:33 crc kubenswrapper[4751]: E0131 14:42:33.161797 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2bc08d22-1e39-4800-b402-ea260cc19637\\\",\\\"systemUUID\\\":\\\"ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:33Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.166929 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.166978 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.166995 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.167018 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.167037 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:33Z","lastTransitionTime":"2026-01-31T14:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:33 crc kubenswrapper[4751]: E0131 14:42:33.186200 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2bc08d22-1e39-4800-b402-ea260cc19637\\\",\\\"systemUUID\\\":\\\"ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:33Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.191030 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.191108 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.191126 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.191152 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.191169 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:33Z","lastTransitionTime":"2026-01-31T14:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:33 crc kubenswrapper[4751]: E0131 14:42:33.210771 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2bc08d22-1e39-4800-b402-ea260cc19637\\\",\\\"systemUUID\\\":\\\"ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:33Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.217054 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.217173 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.217191 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.217216 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.217233 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:33Z","lastTransitionTime":"2026-01-31T14:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:33 crc kubenswrapper[4751]: E0131 14:42:33.264855 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:33Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:33Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:33Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:33Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2bc08d22-1e39-4800-b402-ea260cc19637\\\",\\\"systemUUID\\\":\\\"ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:33Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:33 crc kubenswrapper[4751]: E0131 14:42:33.265574 4751 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.267872 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.267972 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.268009 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.268058 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.268120 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:33Z","lastTransitionTime":"2026-01-31T14:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.370931 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.370997 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.371042 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.371174 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.371251 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:33Z","lastTransitionTime":"2026-01-31T14:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.387601 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 13:32:23.826284927 +0000 UTC Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.404919 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.404991 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.404944 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:33 crc kubenswrapper[4751]: E0131 14:42:33.405134 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.405205 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:42:33 crc kubenswrapper[4751]: E0131 14:42:33.405348 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:42:33 crc kubenswrapper[4751]: E0131 14:42:33.405539 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:42:33 crc kubenswrapper[4751]: E0131 14:42:33.405681 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.474235 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.474295 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.474312 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.474354 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.474371 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:33Z","lastTransitionTime":"2026-01-31T14:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.577407 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.577466 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.577484 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.577508 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.577525 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:33Z","lastTransitionTime":"2026-01-31T14:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.681328 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.681389 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.681406 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.681430 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.681447 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:33Z","lastTransitionTime":"2026-01-31T14:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.784922 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.785112 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.785144 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.785173 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.785193 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:33Z","lastTransitionTime":"2026-01-31T14:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.887470 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.887525 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.887541 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.887563 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.887580 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:33Z","lastTransitionTime":"2026-01-31T14:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.990127 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.990194 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.990214 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.990239 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:33 crc kubenswrapper[4751]: I0131 14:42:33.990256 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:33Z","lastTransitionTime":"2026-01-31T14:42:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.093255 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.093318 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.093356 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.093383 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.093400 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:34Z","lastTransitionTime":"2026-01-31T14:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.196122 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.196185 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.196208 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.196249 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.196273 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:34Z","lastTransitionTime":"2026-01-31T14:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.299728 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.299792 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.299831 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.299856 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.299874 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:34Z","lastTransitionTime":"2026-01-31T14:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.387970 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-12 21:31:49.235268701 +0000 UTC Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.402557 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.402644 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.402666 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.402699 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.402727 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:34Z","lastTransitionTime":"2026-01-31T14:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.506260 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.506334 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.506356 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.506384 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.506406 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:34Z","lastTransitionTime":"2026-01-31T14:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.610183 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.610248 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.610264 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.610288 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.610307 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:34Z","lastTransitionTime":"2026-01-31T14:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.714247 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.714313 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.714331 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.714361 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.714379 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:34Z","lastTransitionTime":"2026-01-31T14:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.817534 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.817588 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.817605 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.817627 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.817675 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:34Z","lastTransitionTime":"2026-01-31T14:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.921422 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.921473 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.921489 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.921512 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:34 crc kubenswrapper[4751]: I0131 14:42:34.921553 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:34Z","lastTransitionTime":"2026-01-31T14:42:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.024171 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.024271 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.024288 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.024312 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.024330 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:35Z","lastTransitionTime":"2026-01-31T14:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.127399 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.127474 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.127497 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.127526 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.127548 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:35Z","lastTransitionTime":"2026-01-31T14:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.231023 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.231122 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.231145 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.231172 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.231195 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:35Z","lastTransitionTime":"2026-01-31T14:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.334523 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.334572 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.334589 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.334611 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.334628 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:35Z","lastTransitionTime":"2026-01-31T14:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.388573 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 12:06:32.382209489 +0000 UTC Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.404852 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.404923 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.404873 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:35 crc kubenswrapper[4751]: E0131 14:42:35.405134 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:42:35 crc kubenswrapper[4751]: E0131 14:42:35.405237 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.405299 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:42:35 crc kubenswrapper[4751]: E0131 14:42:35.405483 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:42:35 crc kubenswrapper[4751]: E0131 14:42:35.405642 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.439961 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.440020 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.440038 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.440063 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.440122 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:35Z","lastTransitionTime":"2026-01-31T14:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.543269 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.543336 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.543352 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.543378 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.543399 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:35Z","lastTransitionTime":"2026-01-31T14:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.646335 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.646385 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.646402 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.646423 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.646440 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:35Z","lastTransitionTime":"2026-01-31T14:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.749727 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.749789 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.749807 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.749829 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.749850 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:35Z","lastTransitionTime":"2026-01-31T14:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.854209 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.854289 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.854312 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.854343 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.854366 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:35Z","lastTransitionTime":"2026-01-31T14:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.958262 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.958325 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.958347 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.958378 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:35 crc kubenswrapper[4751]: I0131 14:42:35.958401 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:35Z","lastTransitionTime":"2026-01-31T14:42:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.061920 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.062026 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.062051 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.062131 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.062154 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:36Z","lastTransitionTime":"2026-01-31T14:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.165581 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.165644 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.165661 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.165683 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.165702 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:36Z","lastTransitionTime":"2026-01-31T14:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.269308 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.269398 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.269411 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.269425 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.269435 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:36Z","lastTransitionTime":"2026-01-31T14:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.379809 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.379934 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.380012 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.380046 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.380135 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:36Z","lastTransitionTime":"2026-01-31T14:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.389298 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 19:20:13.851452218 +0000 UTC Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.427701 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMou
nts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"q
uay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:36Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.442866 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:36Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.460131 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-r
elease\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:36Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.482131 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.482187 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.482205 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.482227 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.482244 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:36Z","lastTransitionTime":"2026-01-31T14:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.482781 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c893782bcbcbd370030fe164c6849abe258ce88db65e3d22c269d20d0bef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOn
ly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"
containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRead
Only\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\"
:0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\"
,\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:36Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.497750 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8c0730-67df-445e-a6ce-c2edce5d9c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f27098d3c41dbb10f76ba04a8a989e91ff3eb6fe0fb0ca746e33417839235c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34680c760b5c6a6e2a521731e962301b54aa3184d5d66792fb43e991c6502a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4v79q\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:36Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.512051 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:36Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.524422 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lxrfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b895e2a-7887-41c3-b641-9c72bb085dda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d0546721c181de3a5f2f4e791dcaeba69f94467bfbf08b4470332a0b4b49fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9hbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lxrfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:36Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.536147 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xtn6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68aeb9c7-d3c3-4c34-96ab-bb947421c504\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xtn6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:36Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:36 crc 
kubenswrapper[4751]: I0131 14:42:36.551227 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:36Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.563211 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:36Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.584826 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.584895 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.584915 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.584941 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.584961 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:36Z","lastTransitionTime":"2026-01-31T14:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.585049 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3446cde960ab284f70a338a9a4bfee334fdbdef503d868692fb847f4df6acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20273cc05bc490f8dd5258dccd02aa0e97d1d22c159ce2c177594920f4af83de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://701178503ef461547e687442de0607af20cee8bca075347d1d1cb2a439562232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5623c591d9a6806335a9b671648fc172e749568a7ad29c3bd41f1a990ba56010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://357afc89f219ac34037aaeb7086076b36f05907502c53006aeb588eaf440cc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8f6719bdf2fef7e21f7c5cdcb3de4b4f08e69bd62e22cb10fe157f638b3c148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://307462c29efce1de33c4d50cdbc74a6cb86d00b128964370024b108640b05b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://307462c29efce1de33c4d50cdbc74a6cb86d00b128964370024b108640b05b5e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:42:29Z\\\",\\\"message\\\":\\\"t:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 14:42:29.388826 6425 services_controller.go:443] Built service openshift-etcd/etcd LB cluster-wide configs for network=default: 
[]services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.253\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:2379, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.253\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9979, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0131 14:42:29.388860 6425 services_controller.go:444] Built service openshift-etcd/etcd LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0131 14:42:29.388874 6425 services_controller.go:445] Built service openshift-etcd/etcd LB template configs for network=default: []services.lbConfig(nil)\\\\nF0131 14:42:29.388002 6425 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n8cdt_openshift-ovn-kubernetes(ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f5e784895f315aa4d8fb6039b9e70c46f977a80fdd5d39ead81d75048abd6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab
2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:36Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.600708 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e00
5ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:36Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.612296 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64468352-f9fe-48bb-b204-b9f828c06bf8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56c1e31014f9e3d0be8140f58cff1c752ad4be1c6c60a942bc18320bbd37b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a7c5739a571e6f3ec88c3798ad2604382b9320c44ddda3d41681a64c6ab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a6478c4477b785bcb405d597f1c835faaf4ef7adb3a2bcd6e70cc2e692f44d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cbc9dfffff59a784e871d47bf2ab1b8419b11840eb329ce21c0f7c24e6e77c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3cbc9dfffff59a784e871d47bf2ab1b8419b11840eb329ce21c0f7c24e6e77c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:36Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.625332 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:36Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.639838 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:36Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.651632 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:36Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.661982 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250c
ae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:36Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.688224 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.688536 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.688729 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:36 crc 
kubenswrapper[4751]: I0131 14:42:36.688924 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.689387 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:36Z","lastTransitionTime":"2026-01-31T14:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.792588 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.792656 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.792676 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.792703 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.792723 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:36Z","lastTransitionTime":"2026-01-31T14:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.895829 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.895903 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.895923 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.895949 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.895967 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:36Z","lastTransitionTime":"2026-01-31T14:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.999218 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.999688 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:36 crc kubenswrapper[4751]: I0131 14:42:36.999707 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:36.999732 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:36.999749 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:36Z","lastTransitionTime":"2026-01-31T14:42:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.102620 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.102818 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.102880 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.102911 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.102929 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:37Z","lastTransitionTime":"2026-01-31T14:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.206099 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.206191 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.206241 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.206267 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.206284 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:37Z","lastTransitionTime":"2026-01-31T14:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.310846 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.310901 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.310918 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.310941 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.310959 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:37Z","lastTransitionTime":"2026-01-31T14:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.389801 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 03:48:58.20389202 +0000 UTC Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.405501 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.405562 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:42:37 crc kubenswrapper[4751]: E0131 14:42:37.405675 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.405510 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:37 crc kubenswrapper[4751]: E0131 14:42:37.405824 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.405885 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:42:37 crc kubenswrapper[4751]: E0131 14:42:37.405965 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:42:37 crc kubenswrapper[4751]: E0131 14:42:37.406039 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.414821 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.414889 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.414907 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.414960 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.414986 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:37Z","lastTransitionTime":"2026-01-31T14:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.518598 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.518669 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.518692 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.518725 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.518748 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:37Z","lastTransitionTime":"2026-01-31T14:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.621597 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.621659 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.621681 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.621709 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.621730 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:37Z","lastTransitionTime":"2026-01-31T14:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.723623 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.723668 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.723683 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.723705 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.723720 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:37Z","lastTransitionTime":"2026-01-31T14:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.826581 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.826644 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.826662 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.826688 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.826706 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:37Z","lastTransitionTime":"2026-01-31T14:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.929426 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.929485 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.929502 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.929528 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:37 crc kubenswrapper[4751]: I0131 14:42:37.929547 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:37Z","lastTransitionTime":"2026-01-31T14:42:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.032646 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.032698 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.032715 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.032739 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.032757 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:38Z","lastTransitionTime":"2026-01-31T14:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.136128 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.136214 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.136232 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.136258 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.136276 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:38Z","lastTransitionTime":"2026-01-31T14:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.239225 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.239282 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.239301 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.239325 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.239342 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:38Z","lastTransitionTime":"2026-01-31T14:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.341626 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.341695 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.341715 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.341740 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.341756 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:38Z","lastTransitionTime":"2026-01-31T14:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.390431 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 13:49:02.735074908 +0000 UTC Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.444252 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.444315 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.444332 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.444354 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.444374 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:38Z","lastTransitionTime":"2026-01-31T14:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.546976 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.547048 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.547098 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.547124 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.547141 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:38Z","lastTransitionTime":"2026-01-31T14:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.649950 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.650001 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.650017 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.650039 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.650055 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:38Z","lastTransitionTime":"2026-01-31T14:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.753679 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.753756 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.753782 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.753813 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.753837 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:38Z","lastTransitionTime":"2026-01-31T14:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.856401 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.856458 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.856475 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.856498 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.856516 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:38Z","lastTransitionTime":"2026-01-31T14:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.959659 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.959731 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.959752 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.959778 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:38 crc kubenswrapper[4751]: I0131 14:42:38.959796 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:38Z","lastTransitionTime":"2026-01-31T14:42:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.062647 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.062703 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.062719 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.062748 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.062768 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:39Z","lastTransitionTime":"2026-01-31T14:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.165071 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.165148 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.165165 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.165187 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.165204 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:39Z","lastTransitionTime":"2026-01-31T14:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.268111 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.268169 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.268186 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.268212 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.268229 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:39Z","lastTransitionTime":"2026-01-31T14:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.371358 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.371416 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.371432 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.371456 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.371473 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:39Z","lastTransitionTime":"2026-01-31T14:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.390909 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-02 20:02:01.20547735 +0000 UTC Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.404773 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:39 crc kubenswrapper[4751]: E0131 14:42:39.404882 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.405060 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:42:39 crc kubenswrapper[4751]: E0131 14:42:39.405157 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.405313 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:42:39 crc kubenswrapper[4751]: E0131 14:42:39.405385 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.405711 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:42:39 crc kubenswrapper[4751]: E0131 14:42:39.405782 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.474021 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.474105 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.474119 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.474136 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.474149 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:39Z","lastTransitionTime":"2026-01-31T14:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.576934 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.576965 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.576977 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.576993 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.577004 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:39Z","lastTransitionTime":"2026-01-31T14:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.680264 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.680299 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.680310 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.680328 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.680340 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:39Z","lastTransitionTime":"2026-01-31T14:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.783095 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.783180 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.783198 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.783220 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.783239 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:39Z","lastTransitionTime":"2026-01-31T14:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.885553 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.885598 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.885614 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.885635 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.885654 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:39Z","lastTransitionTime":"2026-01-31T14:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.988295 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.988358 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.988374 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.988396 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:39 crc kubenswrapper[4751]: I0131 14:42:39.988415 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:39Z","lastTransitionTime":"2026-01-31T14:42:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.091132 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.091201 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.091217 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.091240 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.091256 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:40Z","lastTransitionTime":"2026-01-31T14:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.193920 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.193958 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.193968 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.193981 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.193989 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:40Z","lastTransitionTime":"2026-01-31T14:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.296436 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.296493 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.296512 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.296534 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.296551 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:40Z","lastTransitionTime":"2026-01-31T14:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.391933 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 00:37:34.192427962 +0000 UTC Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.399377 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.399439 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.399457 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.399483 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.399500 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:40Z","lastTransitionTime":"2026-01-31T14:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.502444 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.502480 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.502487 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.502500 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.502509 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:40Z","lastTransitionTime":"2026-01-31T14:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.606524 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.606580 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.606597 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.606620 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.606638 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:40Z","lastTransitionTime":"2026-01-31T14:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.709129 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.709174 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.709185 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.709202 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.709215 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:40Z","lastTransitionTime":"2026-01-31T14:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.811821 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.811862 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.811873 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.811890 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.811900 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:40Z","lastTransitionTime":"2026-01-31T14:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.914548 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.914620 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.914642 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.914673 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:40 crc kubenswrapper[4751]: I0131 14:42:40.914696 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:40Z","lastTransitionTime":"2026-01-31T14:42:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.017520 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.017623 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.017640 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.017660 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.017674 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:41Z","lastTransitionTime":"2026-01-31T14:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.120852 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.120903 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.120918 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.120941 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.120955 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:41Z","lastTransitionTime":"2026-01-31T14:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.222865 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.222915 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.222945 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.222961 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.222970 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:41Z","lastTransitionTime":"2026-01-31T14:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.324961 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.325017 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.325031 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.325050 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.325064 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:41Z","lastTransitionTime":"2026-01-31T14:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.393011 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 23:11:46.210802164 +0000 UTC Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.405305 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.405370 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.405372 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:42:41 crc kubenswrapper[4751]: E0131 14:42:41.405440 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.405326 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:42:41 crc kubenswrapper[4751]: E0131 14:42:41.405625 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:42:41 crc kubenswrapper[4751]: E0131 14:42:41.405638 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:42:41 crc kubenswrapper[4751]: E0131 14:42:41.405677 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.427689 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.427773 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.427798 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.427829 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.427851 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:41Z","lastTransitionTime":"2026-01-31T14:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.530436 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.530485 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.530495 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.530509 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.530517 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:41Z","lastTransitionTime":"2026-01-31T14:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.633974 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.634033 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.634045 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.634061 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.634105 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:41Z","lastTransitionTime":"2026-01-31T14:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.737863 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.737925 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.737935 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.737958 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.737969 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:41Z","lastTransitionTime":"2026-01-31T14:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.867861 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.867935 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.867954 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.867983 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.868003 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:41Z","lastTransitionTime":"2026-01-31T14:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.970806 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.971055 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.971112 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.971126 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:41 crc kubenswrapper[4751]: I0131 14:42:41.971136 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:41Z","lastTransitionTime":"2026-01-31T14:42:41Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.074557 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.074626 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.074645 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.074670 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.074689 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:42Z","lastTransitionTime":"2026-01-31T14:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.177844 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.177888 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.177896 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.177912 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.177922 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:42Z","lastTransitionTime":"2026-01-31T14:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.280542 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.280601 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.280613 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.280632 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.280645 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:42Z","lastTransitionTime":"2026-01-31T14:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.383028 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.383096 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.383113 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.383136 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.383152 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:42Z","lastTransitionTime":"2026-01-31T14:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.393395 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 16:49:51.274816437 +0000 UTC Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.485247 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.485288 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.485301 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.485318 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.485331 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:42Z","lastTransitionTime":"2026-01-31T14:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.587405 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.587444 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.587453 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.587470 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.587484 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:42Z","lastTransitionTime":"2026-01-31T14:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.690281 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.690342 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.690361 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.690385 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.690404 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:42Z","lastTransitionTime":"2026-01-31T14:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.792816 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.792861 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.792873 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.792890 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.792902 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:42Z","lastTransitionTime":"2026-01-31T14:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.894845 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.894908 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.894926 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.895164 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.895188 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:42Z","lastTransitionTime":"2026-01-31T14:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.996890 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.996946 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.996959 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.996979 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:42 crc kubenswrapper[4751]: I0131 14:42:42.996992 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:42Z","lastTransitionTime":"2026-01-31T14:42:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.099124 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.099174 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.099184 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.099201 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.099215 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:43Z","lastTransitionTime":"2026-01-31T14:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.202139 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.202170 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.202181 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.202195 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.202206 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:43Z","lastTransitionTime":"2026-01-31T14:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.305504 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.305553 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.305562 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.305577 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.305587 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:43Z","lastTransitionTime":"2026-01-31T14:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.394529 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 22:19:40.746100594 +0000 UTC Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.405979 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.406105 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.406135 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.406220 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:42:43 crc kubenswrapper[4751]: E0131 14:42:43.406355 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:42:43 crc kubenswrapper[4751]: E0131 14:42:43.406593 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:42:43 crc kubenswrapper[4751]: E0131 14:42:43.406687 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:42:43 crc kubenswrapper[4751]: E0131 14:42:43.406798 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.407594 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.407627 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.407638 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.407652 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.407664 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:43Z","lastTransitionTime":"2026-01-31T14:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.510723 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.510792 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.510810 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.510835 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.510857 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:43Z","lastTransitionTime":"2026-01-31T14:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.613379 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.613629 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.613759 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.613859 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.613939 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:43Z","lastTransitionTime":"2026-01-31T14:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.661442 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.661728 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.661805 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.661936 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.662025 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:43Z","lastTransitionTime":"2026-01-31T14:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:43 crc kubenswrapper[4751]: E0131 14:42:43.676619 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2bc08d22-1e39-4800-b402-ea260cc19637\\\",\\\"systemUUID\\\":\\\"ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:43Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.681332 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.681626 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.681839 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.681957 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.682096 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:43Z","lastTransitionTime":"2026-01-31T14:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:43 crc kubenswrapper[4751]: E0131 14:42:43.696262 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2bc08d22-1e39-4800-b402-ea260cc19637\\\",\\\"systemUUID\\\":\\\"ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:43Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.701027 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.701094 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.701104 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.701119 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.701129 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:43Z","lastTransitionTime":"2026-01-31T14:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:43 crc kubenswrapper[4751]: E0131 14:42:43.714129 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:43Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:43Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:43Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:43Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2bc08d22-1e39-4800-b402-ea260cc19637\\\",\\\"systemUUID\\\":\\\"ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:43Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.718827 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.718874 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.718886 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.718906 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.718917 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:43Z","lastTransitionTime":"2026-01-31T14:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:43 crc kubenswrapper[4751]: E0131 14:42:43.758111 4751 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.760541 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.760590 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.760600 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.760615 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.760632 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:43Z","lastTransitionTime":"2026-01-31T14:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.863543 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.863587 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.863598 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.863613 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.863627 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:43Z","lastTransitionTime":"2026-01-31T14:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.966623 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.966691 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.966714 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.966747 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:43 crc kubenswrapper[4751]: I0131 14:42:43.966769 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:43Z","lastTransitionTime":"2026-01-31T14:42:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.069351 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.069388 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.069399 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.069416 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.069426 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:44Z","lastTransitionTime":"2026-01-31T14:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.174185 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.174231 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.174243 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.174260 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.174271 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:44Z","lastTransitionTime":"2026-01-31T14:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.277961 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.278029 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.278048 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.278102 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.278122 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:44Z","lastTransitionTime":"2026-01-31T14:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.380266 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.380492 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.380632 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.380753 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.380838 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:44Z","lastTransitionTime":"2026-01-31T14:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.395018 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 01:45:32.087808831 +0000 UTC Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.483808 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.484152 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.484245 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.484325 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.484397 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:44Z","lastTransitionTime":"2026-01-31T14:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.586645 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.586902 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.587000 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.587142 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.587229 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:44Z","lastTransitionTime":"2026-01-31T14:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.690294 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.690432 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.690460 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.690498 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.690524 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:44Z","lastTransitionTime":"2026-01-31T14:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.794643 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.794822 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.794896 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.794988 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.795067 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:44Z","lastTransitionTime":"2026-01-31T14:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.899220 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.899569 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.899591 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.899611 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:44 crc kubenswrapper[4751]: I0131 14:42:44.899758 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:44Z","lastTransitionTime":"2026-01-31T14:42:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.004649 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.004694 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.004703 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.004720 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.004729 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:45Z","lastTransitionTime":"2026-01-31T14:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.108593 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.108650 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.108663 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.108684 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.108697 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:45Z","lastTransitionTime":"2026-01-31T14:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.212914 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.212985 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.213007 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.213029 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.213044 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:45Z","lastTransitionTime":"2026-01-31T14:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.316484 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.316548 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.316573 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.316629 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.316652 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:45Z","lastTransitionTime":"2026-01-31T14:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.396593 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 16:02:26.520514282 +0000 UTC Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.405029 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.405148 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.405148 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.405301 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:42:45 crc kubenswrapper[4751]: E0131 14:42:45.405554 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:42:45 crc kubenswrapper[4751]: E0131 14:42:45.405704 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:42:45 crc kubenswrapper[4751]: E0131 14:42:45.405830 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:42:45 crc kubenswrapper[4751]: E0131 14:42:45.405955 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.420460 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.420524 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.420539 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.420563 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.420579 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:45Z","lastTransitionTime":"2026-01-31T14:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.524333 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.524398 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.524412 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.524434 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.524448 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:45Z","lastTransitionTime":"2026-01-31T14:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.626937 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.626991 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.627004 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.627027 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.627040 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:45Z","lastTransitionTime":"2026-01-31T14:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.729691 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.729751 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.729770 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.729794 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.729847 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:45Z","lastTransitionTime":"2026-01-31T14:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.832992 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.833057 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.833111 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.833136 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.833154 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:45Z","lastTransitionTime":"2026-01-31T14:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.936388 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.936438 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.936452 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.936472 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:45 crc kubenswrapper[4751]: I0131 14:42:45.936486 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:45Z","lastTransitionTime":"2026-01-31T14:42:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.039504 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.039568 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.039582 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.039601 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.039613 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:46Z","lastTransitionTime":"2026-01-31T14:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.142197 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.142252 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.142263 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.142281 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.142292 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:46Z","lastTransitionTime":"2026-01-31T14:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.244923 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.244984 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.244994 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.245010 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.245018 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:46Z","lastTransitionTime":"2026-01-31T14:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.347126 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.347170 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.347182 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.347199 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.347212 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:46Z","lastTransitionTime":"2026-01-31T14:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.396787 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 07:21:47.703379668 +0000 UTC Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.406687 4751 scope.go:117] "RemoveContainer" containerID="307462c29efce1de33c4d50cdbc74a6cb86d00b128964370024b108640b05b5e" Jan 31 14:42:46 crc kubenswrapper[4751]: E0131 14:42:46.406868 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-n8cdt_openshift-ovn-kubernetes(ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.424758 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:46Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.441513 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250c
ae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:46Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.449613 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.449776 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.449858 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:46 crc 
kubenswrapper[4751]: I0131 14:42:46.449943 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.450041 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:46Z","lastTransitionTime":"2026-01-31T14:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.460431 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"q
uay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14
:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 
14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"sta
rtedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:46Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.477287 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64468352-f9fe-48bb-b204-b9f828c06bf8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56c1e31014f9e3d0be8140f58cff1c752ad4be1c6c60a942bc18320bbd37b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b
89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a7c5739a571e6f3ec88c3798ad2604382b9320c44ddda3d41681a64c6ab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a6478c4477b785bcb405d597f1c835faaf4ef7adb3a2bcd6e70cc2e692f44d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-3
1T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cbc9dfffff59a784e871d47bf2ab1b8419b11840eb329ce21c0f7c24e6e77c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cbc9dfffff59a784e871d47bf2ab1b8419b11840eb329ce21c0f7c24e6e77c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:46Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.497396 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:46Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.515613 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:46Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.533401 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:46Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.548614 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:46Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.552928 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.552962 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.552973 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.552989 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.553000 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:46Z","lastTransitionTime":"2026-01-31T14:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.563927 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\
",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:46Z 
is after 2025-08-24T17:21:41Z" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.582699 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c893782bcbcbd370030fe164c6849abe258ce88db65e3d22c269d20d0bef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\"
:\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"contai
nerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\
"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"p
odIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:46Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.595850 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8c0730-67df-445e-a6ce-c2edce5d9c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f27098d3c41dbb10f76ba04a8a989e91ff3eb6fe0fb0ca746e33417839235c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-d
ev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34680c760b5c6a6e2a521731e962301b54aa3184d5d66792fb43e991c6502a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4v79q\": Internal error occurred: 
failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:46Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.609734 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:46Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.621862 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lxrfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b895e2a-7887-41c3-b641-9c72bb085dda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d0546721c181de3a5f2f4e791dcaeba69f94467bfbf08b4470332a0b4b49fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9hbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lxrfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:46Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.632642 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xtn6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68aeb9c7-d3c3-4c34-96ab-bb947421c504\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xtn6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:46Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:46 crc 
kubenswrapper[4751]: I0131 14:42:46.645474 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:46Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.654985 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.655030 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.655045 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.655091 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 
14:42:46.655108 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:46Z","lastTransitionTime":"2026-01-31T14:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.655345 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"
,\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:46Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.672280 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3446cde960ab284f70a338a9a4bfee334fdbdef503d868692fb847f4df6acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20273cc05bc490f8dd5258dccd02aa0e97d1d22c159ce2c177594920f4af83de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://701178503ef461547e687442de0607af20cee8bca075347d1d1cb2a439562232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5623c591d9a6806335a9b671648fc172e749568a7ad29c3bd41f1a990ba56010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://357afc89f219ac34037aaeb7086076b36f05907502c53006aeb588eaf440cc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8f6719bdf2fef7e21f7c5cdcb3de4b4f08e69bd62e22cb10fe157f638b3c148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://307462c29efce1de33c4d50cdbc74a6cb86d00b128964370024b108640b05b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://307462c29efce1de33c4d50cdbc74a6cb86d00b128964370024b108640b05b5e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:42:29Z\\\",\\\"message\\\":\\\"t:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 14:42:29.388826 6425 services_controller.go:443] Built service openshift-etcd/etcd LB cluster-wide configs for network=default: 
[]services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.253\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:2379, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.253\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9979, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0131 14:42:29.388860 6425 services_controller.go:444] Built service openshift-etcd/etcd LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0131 14:42:29.388874 6425 services_controller.go:445] Built service openshift-etcd/etcd LB template configs for network=default: []services.lbConfig(nil)\\\\nF0131 14:42:29.388002 6425 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n8cdt_openshift-ovn-kubernetes(ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f5e784895f315aa4d8fb6039b9e70c46f977a80fdd5d39ead81d75048abd6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab
2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:46Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.757451 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.757485 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.757494 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.757510 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.757519 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:46Z","lastTransitionTime":"2026-01-31T14:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.859867 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.859904 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.859913 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.859927 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.859937 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:46Z","lastTransitionTime":"2026-01-31T14:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.962820 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.962866 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.962877 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.962895 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:46 crc kubenswrapper[4751]: I0131 14:42:46.962905 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:46Z","lastTransitionTime":"2026-01-31T14:42:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.065319 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.065560 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.065573 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.065593 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.065605 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:47Z","lastTransitionTime":"2026-01-31T14:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.167971 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.168280 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.168340 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.168396 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.168461 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:47Z","lastTransitionTime":"2026-01-31T14:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.270702 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.270959 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.271041 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.271146 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.271215 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:47Z","lastTransitionTime":"2026-01-31T14:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.373879 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.373938 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.373956 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.373982 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.374002 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:47Z","lastTransitionTime":"2026-01-31T14:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.397486 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 18:46:06.237489149 +0000 UTC Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.405907 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:47 crc kubenswrapper[4751]: E0131 14:42:47.406139 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.406176 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.406240 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.406188 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:42:47 crc kubenswrapper[4751]: E0131 14:42:47.406317 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:42:47 crc kubenswrapper[4751]: E0131 14:42:47.406695 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:42:47 crc kubenswrapper[4751]: E0131 14:42:47.406551 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.476821 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.477136 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.477209 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.477279 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.477359 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:47Z","lastTransitionTime":"2026-01-31T14:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.580199 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.580248 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.580265 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.580292 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.580310 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:47Z","lastTransitionTime":"2026-01-31T14:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.682843 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.682888 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.682901 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.682918 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.682931 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:47Z","lastTransitionTime":"2026-01-31T14:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.785100 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.785491 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.785574 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.785661 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.785729 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:47Z","lastTransitionTime":"2026-01-31T14:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.888629 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.888682 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.888696 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.888743 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.888760 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:47Z","lastTransitionTime":"2026-01-31T14:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.895806 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68aeb9c7-d3c3-4c34-96ab-bb947421c504-metrics-certs\") pod \"network-metrics-daemon-xtn6l\" (UID: \"68aeb9c7-d3c3-4c34-96ab-bb947421c504\") " pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:42:47 crc kubenswrapper[4751]: E0131 14:42:47.895950 4751 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:42:47 crc kubenswrapper[4751]: E0131 14:42:47.896014 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68aeb9c7-d3c3-4c34-96ab-bb947421c504-metrics-certs podName:68aeb9c7-d3c3-4c34-96ab-bb947421c504 nodeName:}" failed. No retries permitted until 2026-01-31 14:43:19.895991593 +0000 UTC m=+104.270704488 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/68aeb9c7-d3c3-4c34-96ab-bb947421c504-metrics-certs") pod "network-metrics-daemon-xtn6l" (UID: "68aeb9c7-d3c3-4c34-96ab-bb947421c504") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.993555 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.993659 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.993680 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.993709 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:47 crc kubenswrapper[4751]: I0131 14:42:47.993732 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:47Z","lastTransitionTime":"2026-01-31T14:42:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.097154 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.097193 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.097203 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.097219 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.097229 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:48Z","lastTransitionTime":"2026-01-31T14:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.201264 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.201323 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.201342 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.201366 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.201384 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:48Z","lastTransitionTime":"2026-01-31T14:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.303805 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.303854 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.303869 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.303891 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.303908 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:48Z","lastTransitionTime":"2026-01-31T14:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.398684 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 19:40:11.34768546 +0000 UTC Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.406513 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.406555 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.406567 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.406586 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.406598 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:48Z","lastTransitionTime":"2026-01-31T14:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.508221 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.508496 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.508628 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.508723 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.508817 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:48Z","lastTransitionTime":"2026-01-31T14:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.610452 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.610506 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.610522 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.610545 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.610564 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:48Z","lastTransitionTime":"2026-01-31T14:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.712377 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.712414 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.712425 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.712441 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.712452 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:48Z","lastTransitionTime":"2026-01-31T14:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.814892 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.815157 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.815235 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.815301 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.815366 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:48Z","lastTransitionTime":"2026-01-31T14:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.918312 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.918355 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.918371 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.918392 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:48 crc kubenswrapper[4751]: I0131 14:42:48.918412 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:48Z","lastTransitionTime":"2026-01-31T14:42:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.021243 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.021319 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.021339 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.021369 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.021395 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:49Z","lastTransitionTime":"2026-01-31T14:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.123828 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.125788 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.126008 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.126208 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.126624 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:49Z","lastTransitionTime":"2026-01-31T14:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.230214 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.230248 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.230257 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.230271 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.230280 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:49Z","lastTransitionTime":"2026-01-31T14:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.333107 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.333136 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.333146 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.333160 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.333169 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:49Z","lastTransitionTime":"2026-01-31T14:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.399716 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 02:03:32.583360852 +0000 UTC Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.405193 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.405229 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.405340 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.405337 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:42:49 crc kubenswrapper[4751]: E0131 14:42:49.405471 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:42:49 crc kubenswrapper[4751]: E0131 14:42:49.405545 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:42:49 crc kubenswrapper[4751]: E0131 14:42:49.405626 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:42:49 crc kubenswrapper[4751]: E0131 14:42:49.405765 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.435419 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.435471 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.435482 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.435508 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.435518 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:49Z","lastTransitionTime":"2026-01-31T14:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.538350 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.538387 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.538397 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.538412 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.538421 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:49Z","lastTransitionTime":"2026-01-31T14:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.642821 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.642848 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.642856 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.642869 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.642878 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:49Z","lastTransitionTime":"2026-01-31T14:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.745040 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.745086 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.745095 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.745106 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.745116 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:49Z","lastTransitionTime":"2026-01-31T14:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.847908 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.847975 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.847997 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.848028 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.848052 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:49Z","lastTransitionTime":"2026-01-31T14:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.898161 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rtthp_e7dd989b-33df-4562-a60b-f273428fea3d/kube-multus/0.log" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.898213 4751 generic.go:334] "Generic (PLEG): container finished" podID="e7dd989b-33df-4562-a60b-f273428fea3d" containerID="7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608" exitCode=1 Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.898245 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rtthp" event={"ID":"e7dd989b-33df-4562-a60b-f273428fea3d","Type":"ContainerDied","Data":"7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608"} Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.898644 4751 scope.go:117] "RemoveContainer" containerID="7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.925252 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.939246 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.950451 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.950486 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.950495 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.950510 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.950521 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:49Z","lastTransitionTime":"2026-01-31T14:42:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.971608 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3446cde960ab284f70a338a9a4bfee334fdbdef503d868692fb847f4df6acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20273cc05bc490f8dd5258dccd02aa0e97d1d22c159ce2c177594920f4af83de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://701178503ef461547e687442de0607af20cee8bca075347d1d1cb2a439562232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5623c591d9a6806335a9b671648fc172e749568a7ad29c3bd41f1a990ba56010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://357afc89f219ac34037aaeb7086076b36f05907502c53006aeb588eaf440cc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8f6719bdf2fef7e21f7c5cdcb3de4b4f08e69bd62e22cb10fe157f638b3c148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://307462c29efce1de33c4d50cdbc74a6cb86d00b128964370024b108640b05b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://307462c29efce1de33c4d50cdbc74a6cb86d00b128964370024b108640b05b5e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:42:29Z\\\",\\\"message\\\":\\\"t:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 14:42:29.388826 6425 services_controller.go:443] Built service openshift-etcd/etcd LB cluster-wide configs for network=default: 
[]services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.253\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:2379, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.253\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9979, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0131 14:42:29.388860 6425 services_controller.go:444] Built service openshift-etcd/etcd LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0131 14:42:29.388874 6425 services_controller.go:445] Built service openshift-etcd/etcd LB template configs for network=default: []services.lbConfig(nil)\\\\nF0131 14:42:29.388002 6425 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n8cdt_openshift-ovn-kubernetes(ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f5e784895f315aa4d8fb6039b9e70c46f977a80fdd5d39ead81d75048abd6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab
2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:49 crc kubenswrapper[4751]: I0131 14:42:49.983857 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.000297 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:49Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.014924 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250c
ae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:50Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.032355 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e00
5ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:50Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.045098 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64468352-f9fe-48bb-b204-b9f828c06bf8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56c1e31014f9e3d0be8140f58cff1c752ad4be1c6c60a942bc18320bbd37b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a7c5739a571e6f3ec88c3798ad2604382b9320c44ddda3d41681a64c6ab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a6478c4477b785bcb405d597f1c835faaf4ef7adb3a2bcd6e70cc2e692f44d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cbc9dfffff59a784e871d47bf2ab1b8419b11840eb329ce21c0f7c24e6e77c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3cbc9dfffff59a784e871d47bf2ab1b8419b11840eb329ce21c0f7c24e6e77c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:50Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.053834 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.053936 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.053997 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.054090 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.054181 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:50Z","lastTransitionTime":"2026-01-31T14:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.059718 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:50Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.073852 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started
\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\
\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:50Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.084814 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:50Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.102562 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:49Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:49Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:42:49Z\\\",\\\"message\\\":\\\"2026-01-31T14:42:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eaa2f7bd-5331-496e-8b4a-3784e7751508\\\\n2026-01-31T14:42:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eaa2f7bd-5331-496e-8b4a-3784e7751508 to /host/opt/cni/bin/\\\\n2026-01-31T14:42:04Z [verbose] multus-daemon started\\\\n2026-01-31T14:42:04Z [verbose] Readiness Indicator file check\\\\n2026-01-31T14:42:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\
\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:50Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.123527 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c893782bcbcbd370030fe164c6849abe258ce88db65e3d22c269d20d0bef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"
kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddf
bb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\
\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"image\\\":\\\"quay.io/o
penshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mount
Path\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:50Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.135936 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8c0730-67df-445e-a6ce-c2edce5d9c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f27098d3c41dbb10f76ba04a8a989e91ff3eb6fe0fb0ca746e33417839235c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34680c760b5c6a6e2a521731e962301b54aa3
184d5d66792fb43e991c6502a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4v79q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:50Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.149365 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:50Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.157876 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.157922 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.157940 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.157966 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.157983 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:50Z","lastTransitionTime":"2026-01-31T14:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady 
message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.161305 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lxrfr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b895e2a-7887-41c3-b641-9c72bb085dda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d0546721c181de3a5f2f4e791dcaeba69f94467bfbf08b4470332a0b4b49fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/d
ocker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9hbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lxrfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:50Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.174288 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xtn6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68aeb9c7-d3c3-4c34-96ab-bb947421c504\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xtn6l\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:50Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.261423 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.261470 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.261479 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.261494 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.261505 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:50Z","lastTransitionTime":"2026-01-31T14:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.364198 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.364266 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.364289 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.364319 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.364341 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:50Z","lastTransitionTime":"2026-01-31T14:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.400004 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 21:47:35.620530663 +0000 UTC Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.467496 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.467539 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.467548 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.467568 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.467581 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:50Z","lastTransitionTime":"2026-01-31T14:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.570169 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.570528 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.570618 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.570699 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.570788 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:50Z","lastTransitionTime":"2026-01-31T14:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.673611 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.673912 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.674090 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.674233 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.674353 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:50Z","lastTransitionTime":"2026-01-31T14:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.782197 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.782239 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.782249 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.782266 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.782277 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:50Z","lastTransitionTime":"2026-01-31T14:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.885329 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.885634 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.885808 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.885952 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.886135 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:50Z","lastTransitionTime":"2026-01-31T14:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.905994 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rtthp_e7dd989b-33df-4562-a60b-f273428fea3d/kube-multus/0.log" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.906280 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rtthp" event={"ID":"e7dd989b-33df-4562-a60b-f273428fea3d","Type":"ContainerStarted","Data":"2bdcbdac0cc4b17e027947c041a0ee4a7d7f549aa6dbe5c07c370ca7c0c50475"} Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.928320 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8c0730-67df-445e-a6ce-c2edce5d9c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f27098d3c41dbb10f76ba04a8a989e91ff3eb6fe0fb0ca746e33417839235c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34680c760b5c6a6e2a521731e962301b54aa3184d5d66792fb43e991c6502a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:14Z\\\"}}\" for pod 
\"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4v79q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:50Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.945293 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"star
ted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"container
ID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:50Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.958230 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:50Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.974629 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bdcbdac0cc4b17e027947c041a0ee4a7d7f549aa6dbe5c07c370ca7c0c50475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:42:49Z\\\",\\\"message\\\":\\\"2026-01-31T14:42:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to 
/host/opt/cni/bin/upgrade_eaa2f7bd-5331-496e-8b4a-3784e7751508\\\\n2026-01-31T14:42:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eaa2f7bd-5331-496e-8b4a-3784e7751508 to /host/opt/cni/bin/\\\\n2026-01-31T14:42:04Z [verbose] multus-daemon started\\\\n2026-01-31T14:42:04Z [verbose] Readiness Indicator file check\\\\n2026-01-31T14:42:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.
d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:50Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.989318 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.989360 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.989376 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.989399 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.989416 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:50Z","lastTransitionTime":"2026-01-31T14:42:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:50 crc kubenswrapper[4751]: I0131 14:42:50.995612 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c893782bcbcbd370030fe164c6849abe258ce88db65e3d22c269d20d0bef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-ad
ditional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df31
2ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Complet
ed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"image\\\":\\\"quay.io/openshift-relea
se-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:50Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.012247 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.025501 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lxrfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b895e2a-7887-41c3-b641-9c72bb085dda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d0546721c181de3a5f2f4e791dcaeba69f94467bfbf08b4470332a0b4b49fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9hbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lxrfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.038690 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xtn6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68aeb9c7-d3c3-4c34-96ab-bb947421c504\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xtn6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:51 crc 
kubenswrapper[4751]: I0131 14:42:51.057708 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.070668 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.091617 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.091681 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.091698 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.091722 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.091740 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:51Z","lastTransitionTime":"2026-01-31T14:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.093959 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3446cde960ab284f70a338a9a4bfee334fdbdef503d868692fb847f4df6acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20273cc05bc490f8dd5258dccd02aa0e97d1d22c159ce2c177594920f4af83de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://701178503ef461547e687442de0607af20cee8bca075347d1d1cb2a439562232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5623c591d9a6806335a9b671648fc172e749568a7ad29c3bd41f1a990ba56010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://357afc89f219ac34037aaeb7086076b36f05907502c53006aeb588eaf440cc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8f6719bdf2fef7e21f7c5cdcb3de4b4f08e69bd62e22cb10fe157f638b3c148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://307462c29efce1de33c4d50cdbc74a6cb86d00b128964370024b108640b05b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://307462c29efce1de33c4d50cdbc74a6cb86d00b128964370024b108640b05b5e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:42:29Z\\\",\\\"message\\\":\\\"t:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 14:42:29.388826 6425 services_controller.go:443] Built service openshift-etcd/etcd LB cluster-wide configs for network=default: 
[]services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.253\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:2379, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.253\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9979, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0131 14:42:29.388860 6425 services_controller.go:444] Built service openshift-etcd/etcd LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0131 14:42:29.388874 6425 services_controller.go:445] Built service openshift-etcd/etcd LB template configs for network=default: []services.lbConfig(nil)\\\\nF0131 14:42:29.388002 6425 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n8cdt_openshift-ovn-kubernetes(ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f5e784895f315aa4d8fb6039b9e70c46f977a80fdd5d39ead81d75048abd6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab
2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.108426 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.123722 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.137803 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.153988 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250c
ae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.172107 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e00
5ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.186147 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64468352-f9fe-48bb-b204-b9f828c06bf8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56c1e31014f9e3d0be8140f58cff1c752ad4be1c6c60a942bc18320bbd37b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a7c5739a571e6f3ec88c3798ad2604382b9320c44ddda3d41681a64c6ab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a6478c4477b785bcb405d597f1c835faaf4ef7adb3a2bcd6e70cc2e692f44d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cbc9dfffff59a784e871d47bf2ab1b8419b11840eb329ce21c0f7c24e6e77c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3cbc9dfffff59a784e871d47bf2ab1b8419b11840eb329ce21c0f7c24e6e77c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:51Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.193783 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.193839 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.193858 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.193882 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.193900 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:51Z","lastTransitionTime":"2026-01-31T14:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.296466 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.296517 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.296533 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.296554 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.296571 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:51Z","lastTransitionTime":"2026-01-31T14:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.399471 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.399518 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.399531 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.399553 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.399570 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:51Z","lastTransitionTime":"2026-01-31T14:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.400828 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 07:05:42.661210316 +0000 UTC Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.405208 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.405274 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.405230 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.405222 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:42:51 crc kubenswrapper[4751]: E0131 14:42:51.405367 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:42:51 crc kubenswrapper[4751]: E0131 14:42:51.405520 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:42:51 crc kubenswrapper[4751]: E0131 14:42:51.405620 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:42:51 crc kubenswrapper[4751]: E0131 14:42:51.405691 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.502419 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.502468 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.502480 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.502499 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.502513 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:51Z","lastTransitionTime":"2026-01-31T14:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.604419 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.604450 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.604459 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.604476 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.604487 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:51Z","lastTransitionTime":"2026-01-31T14:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.706949 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.706981 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.706989 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.707003 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.707012 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:51Z","lastTransitionTime":"2026-01-31T14:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.810105 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.810141 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.810151 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.810163 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.810172 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:51Z","lastTransitionTime":"2026-01-31T14:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.912706 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.912779 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.912802 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.912830 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:51 crc kubenswrapper[4751]: I0131 14:42:51.912850 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:51Z","lastTransitionTime":"2026-01-31T14:42:51Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.015444 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.015494 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.015512 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.015536 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.015555 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:52Z","lastTransitionTime":"2026-01-31T14:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.118975 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.119015 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.119026 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.119040 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.119051 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:52Z","lastTransitionTime":"2026-01-31T14:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.221354 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.221428 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.221451 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.221479 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.221503 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:52Z","lastTransitionTime":"2026-01-31T14:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.324261 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.324319 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.324337 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.324364 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.324384 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:52Z","lastTransitionTime":"2026-01-31T14:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.401554 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 02:44:02.046084211 +0000 UTC Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.426841 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.426889 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.426899 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.426913 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.426922 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:52Z","lastTransitionTime":"2026-01-31T14:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.529993 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.530403 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.530452 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.530475 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.530491 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:52Z","lastTransitionTime":"2026-01-31T14:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.633695 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.633735 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.633743 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.633759 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.633774 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:52Z","lastTransitionTime":"2026-01-31T14:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.737044 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.737133 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.737150 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.737172 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.737189 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:52Z","lastTransitionTime":"2026-01-31T14:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.839586 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.839911 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.839923 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.839935 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.839946 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:52Z","lastTransitionTime":"2026-01-31T14:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.941978 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.942054 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.942115 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.942149 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:52 crc kubenswrapper[4751]: I0131 14:42:52.942177 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:52Z","lastTransitionTime":"2026-01-31T14:42:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.045268 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.045321 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.045337 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.045361 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.045383 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:53Z","lastTransitionTime":"2026-01-31T14:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.147884 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.147943 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.147965 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.147989 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.148006 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:53Z","lastTransitionTime":"2026-01-31T14:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.251195 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.251252 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.251270 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.251300 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.251318 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:53Z","lastTransitionTime":"2026-01-31T14:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.353446 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.353520 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.353536 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.353565 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.353589 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:53Z","lastTransitionTime":"2026-01-31T14:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.402498 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 12:54:59.610272862 +0000 UTC Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.405787 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.405907 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:42:53 crc kubenswrapper[4751]: E0131 14:42:53.406041 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.406119 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.406156 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:42:53 crc kubenswrapper[4751]: E0131 14:42:53.406281 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:42:53 crc kubenswrapper[4751]: E0131 14:42:53.406313 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:42:53 crc kubenswrapper[4751]: E0131 14:42:53.406455 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.456582 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.456634 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.456646 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.456663 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.456675 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:53Z","lastTransitionTime":"2026-01-31T14:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.559042 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.559129 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.559151 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.559180 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.559201 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:53Z","lastTransitionTime":"2026-01-31T14:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.661241 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.661287 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.661299 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.661315 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.661326 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:53Z","lastTransitionTime":"2026-01-31T14:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.764490 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.764587 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.764609 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.764639 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.764659 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:53Z","lastTransitionTime":"2026-01-31T14:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.866602 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.866649 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.866660 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.866677 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.866690 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:53Z","lastTransitionTime":"2026-01-31T14:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.915041 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.915100 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.915110 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.915126 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.915136 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:53Z","lastTransitionTime":"2026-01-31T14:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:53 crc kubenswrapper[4751]: E0131 14:42:53.934133 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2bc08d22-1e39-4800-b402-ea260cc19637\\\",\\\"systemUUID\\\":\\\"ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:53Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.939670 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.939736 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.939748 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.939768 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.939802 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:53Z","lastTransitionTime":"2026-01-31T14:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:53 crc kubenswrapper[4751]: E0131 14:42:53.960030 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2bc08d22-1e39-4800-b402-ea260cc19637\\\",\\\"systemUUID\\\":\\\"ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:53Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.965986 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.966110 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.966130 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.966188 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.966205 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:53Z","lastTransitionTime":"2026-01-31T14:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:53 crc kubenswrapper[4751]: E0131 14:42:53.987913 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2bc08d22-1e39-4800-b402-ea260cc19637\\\",\\\"systemUUID\\\":\\\"ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:53Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.994654 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.994708 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.994719 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.994739 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:53 crc kubenswrapper[4751]: I0131 14:42:53.994752 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:53Z","lastTransitionTime":"2026-01-31T14:42:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:54 crc kubenswrapper[4751]: E0131 14:42:54.010146 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:53Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:53Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:53Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:53Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2bc08d22-1e39-4800-b402-ea260cc19637\\\",\\\"systemUUID\\\":\\\"ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:54Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.014724 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.014841 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.014903 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.014929 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.014946 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:54Z","lastTransitionTime":"2026-01-31T14:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:54 crc kubenswrapper[4751]: E0131 14:42:54.030013 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:54Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:54Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:42:54Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:54Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2bc08d22-1e39-4800-b402-ea260cc19637\\\",\\\"systemUUID\\\":\\\"ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:54Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:54 crc kubenswrapper[4751]: E0131 14:42:54.030213 4751 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.032703 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.032742 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.032771 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.032789 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.032798 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:54Z","lastTransitionTime":"2026-01-31T14:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.135567 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.135660 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.135678 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.135737 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.135756 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:54Z","lastTransitionTime":"2026-01-31T14:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.239714 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.239773 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.239791 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.239814 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.239831 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:54Z","lastTransitionTime":"2026-01-31T14:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.343647 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.343712 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.343764 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.343794 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.343818 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:54Z","lastTransitionTime":"2026-01-31T14:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.402686 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 02:11:33.052218152 +0000 UTC Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.446181 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.446265 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.446313 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.446335 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.446352 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:54Z","lastTransitionTime":"2026-01-31T14:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.548791 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.548884 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.548894 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.548929 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.548942 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:54Z","lastTransitionTime":"2026-01-31T14:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.651619 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.651659 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.651674 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.651693 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.651708 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:54Z","lastTransitionTime":"2026-01-31T14:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.754984 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.755052 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.755112 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.755677 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.755738 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:54Z","lastTransitionTime":"2026-01-31T14:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.858671 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.858709 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.858717 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.858731 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.858742 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:54Z","lastTransitionTime":"2026-01-31T14:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.962116 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.962189 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.962210 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.962235 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:54 crc kubenswrapper[4751]: I0131 14:42:54.962252 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:54Z","lastTransitionTime":"2026-01-31T14:42:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.065033 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.065131 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.065148 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.065172 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.065190 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:55Z","lastTransitionTime":"2026-01-31T14:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.167876 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.167925 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.167943 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.167999 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.168016 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:55Z","lastTransitionTime":"2026-01-31T14:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.270992 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.271363 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.271520 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.271656 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.271782 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:55Z","lastTransitionTime":"2026-01-31T14:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.374540 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.374606 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.374627 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.374649 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.374667 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:55Z","lastTransitionTime":"2026-01-31T14:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.403227 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 15:51:35.202500685 +0000 UTC Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.405682 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.405716 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.405761 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.405705 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:55 crc kubenswrapper[4751]: E0131 14:42:55.405916 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:42:55 crc kubenswrapper[4751]: E0131 14:42:55.406318 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:42:55 crc kubenswrapper[4751]: E0131 14:42:55.406484 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:42:55 crc kubenswrapper[4751]: E0131 14:42:55.406570 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.477347 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.477429 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.477447 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.477471 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.477488 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:55Z","lastTransitionTime":"2026-01-31T14:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.579876 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.579942 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.579964 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.579991 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.580012 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:55Z","lastTransitionTime":"2026-01-31T14:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.681965 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.682029 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.682050 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.682112 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.682136 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:55Z","lastTransitionTime":"2026-01-31T14:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.785126 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.785175 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.785187 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.785204 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.785217 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:55Z","lastTransitionTime":"2026-01-31T14:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.887857 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.887899 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.887911 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.887927 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.887939 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:55Z","lastTransitionTime":"2026-01-31T14:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.990609 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.990674 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.990690 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.990716 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:55 crc kubenswrapper[4751]: I0131 14:42:55.990734 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:55Z","lastTransitionTime":"2026-01-31T14:42:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.093145 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.093181 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.093193 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.093207 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.093225 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:56Z","lastTransitionTime":"2026-01-31T14:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.195872 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.195922 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.195939 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.195961 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.195977 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:56Z","lastTransitionTime":"2026-01-31T14:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.298880 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.298945 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.298962 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.298986 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.299005 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:56Z","lastTransitionTime":"2026-01-31T14:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.402128 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.402191 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.402213 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.402239 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.402257 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:56Z","lastTransitionTime":"2026-01-31T14:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.403572 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 00:42:19.648682466 +0000 UTC Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.420428 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\
\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/op
enshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of 
insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:56Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.435469 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64468352-f9fe-48bb-b204-b9f828c06bf8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56c1e31014f9e3d0be8140f58cff1c752ad4be1c6c60a942bc18320bbd37b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a7c5739a571e6f3ec88c3798ad2604382b9320c44ddda3d41681a64c6ab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a6478c4477b785bcb405d597f1c835faaf4ef7adb3a2bcd6e70cc2e692f44d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cbc9dfffff59a784e871d47bf2ab1b8419b11840eb329ce21c0f7c24e6e77c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3cbc9dfffff59a784e871d47bf2ab1b8419b11840eb329ce21c0f7c24e6e77c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:56Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.457058 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:56Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.473139 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:56Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.492873 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:56Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.504420 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.504497 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.504523 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.504558 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.504576 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:56Z","lastTransitionTime":"2026-01-31T14:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.512558 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250c
ae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:56Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.532029 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bdcbdac0cc4b17e027947c041a0ee4a7d7f549aa6dbe5c07c370ca7c0c50475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:42:49Z\\\",\\\"message\\\":\\\"2026-01-31T14:42:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eaa2f7bd-5331-496e-8b4a-3784e7751508\\\\n2026-01-31T14:42:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eaa2f7bd-5331-496e-8b4a-3784e7751508 to /host/opt/cni/bin/\\\\n2026-01-31T14:42:04Z [verbose] multus-daemon started\\\\n2026-01-31T14:42:04Z [verbose] 
Readiness Indicator file check\\\\n2026-01-31T14:42:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/
var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:56Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.555191 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c
893782bcbcbd370030fe164c6849abe258ce88db65e3d22c269d20d0bef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203b
b2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-rel
ease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-c
ni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:56Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.571996 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8c0730-67df-445e-a6ce-c2edce5d9c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f27098d3c41dbb10f76ba04a8a989e91ff3eb6fe0fb0ca746e33417839235c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34680c760b5c6a6e2a521731e962301b54aa3
184d5d66792fb43e991c6502a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4v79q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:56Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.587478 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:56Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.603878 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:42:56Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.608364 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.608425 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.608438 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.608454 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.608465 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:56Z","lastTransitionTime":"2026-01-31T14:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.619471 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xtn6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68aeb9c7-d3c3-4c34-96ab-bb947421c504\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xtn6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:56Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:56 crc 
kubenswrapper[4751]: I0131 14:42:56.638121 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:56Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.651495 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lxrfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b895e2a-7887-41c3-b641-9c72bb085dda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d0546721c181de3a5f2f4e791dcaeba69f94467bfbf08b4470332a0b4b49fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9hbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lxrfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:56Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.683852 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3446cde960ab284f70a338a9a4bfee334fdbdef503d868692fb847f4df6acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20273cc05bc490f8dd5258dccd02aa0e97d1d22c159ce2c177594920f4af83de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://701178503ef461547e687442de0607af20cee8bca075347d1d1cb2a439562232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5623c591d9a6806335a9b671648fc172e749568a7ad29c3bd41f1a990ba56010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://357afc89f219ac34037aaeb7086076b36f05907502c53006aeb588eaf440cc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8f6719bdf2fef7e21f7c5cdcb3de4b4f08e69bd62e22cb10fe157f638b3c148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://307462c29efce1de33c4d50cdbc74a6cb86d00b128964370024b108640b05b5e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://307462c29efce1de33c4d50cdbc74a6cb86d00b128964370024b108640b05b5e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:42:29Z\\\",\\\"message\\\":\\\"t:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 14:42:29.388826 6425 services_controller.go:443] Built service openshift-etcd/etcd LB cluster-wide configs for network=default: 
[]services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.253\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:2379, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.253\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9979, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0131 14:42:29.388860 6425 services_controller.go:444] Built service openshift-etcd/etcd LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0131 14:42:29.388874 6425 services_controller.go:445] Built service openshift-etcd/etcd LB template configs for network=default: []services.lbConfig(nil)\\\\nF0131 14:42:29.388002 6425 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n8cdt_openshift-ovn-kubernetes(ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f5e784895f315aa4d8fb6039b9e70c46f977a80fdd5d39ead81d75048abd6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab
2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:56Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.705932 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:56Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.710707 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.710818 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.710842 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.710874 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.710896 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:56Z","lastTransitionTime":"2026-01-31T14:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.722414 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:56Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.814040 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.814120 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.814140 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.814165 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.814183 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:56Z","lastTransitionTime":"2026-01-31T14:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.917531 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.917874 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.918011 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.918321 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:56 crc kubenswrapper[4751]: I0131 14:42:56.918473 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:56Z","lastTransitionTime":"2026-01-31T14:42:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.022318 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.023299 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.023485 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.023655 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.023777 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:57Z","lastTransitionTime":"2026-01-31T14:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.127275 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.127331 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.127342 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.127361 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.127372 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:57Z","lastTransitionTime":"2026-01-31T14:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.230544 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.230601 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.230615 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.230638 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.230655 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:57Z","lastTransitionTime":"2026-01-31T14:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.334133 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.334175 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.334188 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.334210 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.334227 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:57Z","lastTransitionTime":"2026-01-31T14:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.404516 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-03 18:01:54.246493654 +0000 UTC Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.404832 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.404865 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.404881 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.404881 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:42:57 crc kubenswrapper[4751]: E0131 14:42:57.404996 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:42:57 crc kubenswrapper[4751]: E0131 14:42:57.405168 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:42:57 crc kubenswrapper[4751]: E0131 14:42:57.405295 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:42:57 crc kubenswrapper[4751]: E0131 14:42:57.405420 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.436960 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.437043 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.437056 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.437097 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.437114 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:57Z","lastTransitionTime":"2026-01-31T14:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.540407 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.540739 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.540839 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.540924 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.540999 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:57Z","lastTransitionTime":"2026-01-31T14:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.643761 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.643816 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.643826 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.643843 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.643854 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:57Z","lastTransitionTime":"2026-01-31T14:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.746462 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.746520 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.746537 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.746560 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.746577 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:57Z","lastTransitionTime":"2026-01-31T14:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.849415 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.849658 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.849778 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.849862 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.850144 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:57Z","lastTransitionTime":"2026-01-31T14:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.952395 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.952743 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.952826 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.952917 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:57 crc kubenswrapper[4751]: I0131 14:42:57.952997 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:57Z","lastTransitionTime":"2026-01-31T14:42:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.055038 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.055339 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.055408 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.055489 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.055559 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:58Z","lastTransitionTime":"2026-01-31T14:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.157609 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.157659 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.157671 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.157689 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.157702 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:58Z","lastTransitionTime":"2026-01-31T14:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.263240 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.263324 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.263355 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.263409 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.263435 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:58Z","lastTransitionTime":"2026-01-31T14:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.366394 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.366443 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.366460 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.366484 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.366500 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:58Z","lastTransitionTime":"2026-01-31T14:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.405537 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 01:26:27.520264792 +0000 UTC Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.469685 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.469767 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.469791 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.469816 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.469835 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:58Z","lastTransitionTime":"2026-01-31T14:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.573170 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.573552 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.573704 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.573845 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.573962 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:58Z","lastTransitionTime":"2026-01-31T14:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.676646 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.676971 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.677146 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.677280 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.677393 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:58Z","lastTransitionTime":"2026-01-31T14:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.780674 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.780722 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.780735 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.780752 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.780765 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:58Z","lastTransitionTime":"2026-01-31T14:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.884019 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.884132 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.884157 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.884188 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.884210 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:58Z","lastTransitionTime":"2026-01-31T14:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.987022 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.987086 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.987100 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.987118 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:58 crc kubenswrapper[4751]: I0131 14:42:58.987131 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:58Z","lastTransitionTime":"2026-01-31T14:42:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.089322 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.089356 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.089367 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.089382 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.089392 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:59Z","lastTransitionTime":"2026-01-31T14:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.192498 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.192539 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.192551 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.192566 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.192577 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:59Z","lastTransitionTime":"2026-01-31T14:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.295051 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.295116 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.295127 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.295145 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.295157 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:59Z","lastTransitionTime":"2026-01-31T14:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.398411 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.398453 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.398463 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.398477 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.398487 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:59Z","lastTransitionTime":"2026-01-31T14:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.404779 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.404863 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.404985 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:42:59 crc kubenswrapper[4751]: E0131 14:42:59.404871 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:42:59 crc kubenswrapper[4751]: E0131 14:42:59.405116 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.404938 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:42:59 crc kubenswrapper[4751]: E0131 14:42:59.405295 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:42:59 crc kubenswrapper[4751]: E0131 14:42:59.405397 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.406222 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 20:48:41.556041555 +0000 UTC Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.406836 4751 scope.go:117] "RemoveContainer" containerID="307462c29efce1de33c4d50cdbc74a6cb86d00b128964370024b108640b05b5e" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.501480 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.501519 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.501531 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.501550 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.501562 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:59Z","lastTransitionTime":"2026-01-31T14:42:59Z","reason":"KubeletNotReady","message":"container runtime network 
not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.604621 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.605128 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.605147 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.605170 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.605218 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:59Z","lastTransitionTime":"2026-01-31T14:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.709235 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.709300 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.709318 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.709344 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.709362 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:59Z","lastTransitionTime":"2026-01-31T14:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.812248 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.812413 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.812437 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.812697 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.812744 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:59Z","lastTransitionTime":"2026-01-31T14:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.915654 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.915702 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.915715 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.915734 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.915749 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:42:59Z","lastTransitionTime":"2026-01-31T14:42:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.938865 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8cdt_ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b/ovnkube-controller/2.log" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.942225 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" event={"ID":"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b","Type":"ContainerStarted","Data":"373c89defd0c3e17f3124be6af9afba6b241a48af85f558bb51d281d16ba27ac"} Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.942769 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.959542 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert
-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.976309 4751 status_manager.go:875] 
"Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:42:59 crc kubenswrapper[4751]: I0131 14:42:59.994267 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bdcbdac0cc4b17e027947c041a0ee4a7d7f549aa6dbe5c07c370ca7c0c50475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:42:49Z\\\",\\\"message\\\":\\\"2026-01-31T14:42:03+00:0
0 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eaa2f7bd-5331-496e-8b4a-3784e7751508\\\\n2026-01-31T14:42:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eaa2f7bd-5331-496e-8b4a-3784e7751508 to /host/opt/cni/bin/\\\\n2026-01-31T14:42:04Z [verbose] multus-daemon started\\\\n2026-01-31T14:42:04Z [verbose] Readiness Indicator file check\\\\n2026-01-31T14:42:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"n
ame\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:42:59Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.013779 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c893782bcbcbd370030fe164c6849abe258ce88db65e3d22c269d20d0bef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c73f
b39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.018373 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.018398 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.018407 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.018422 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.018434 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:00Z","lastTransitionTime":"2026-01-31T14:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.030848 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8c0730-67df-445e-a6ce-c2edce5d9c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f27098d3c41dbb10f76ba04a8a989e91ff3eb6fe0fb0ca746e33417839235c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled
\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34680c760b5c6a6e2a521731e962301b54aa3184d5d66792fb43e991c6502a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4v79q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.047619 4751 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.058458 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lxrfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b895e2a-7887-41c3-b641-9c72bb085dda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d0546721c181de3a5f2f4e791dcaeba69f94467bfbf08b4470332a0b4b49fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9hbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lxrfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.070029 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xtn6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68aeb9c7-d3c3-4c34-96ab-bb947421c504\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xtn6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:00 crc 
kubenswrapper[4751]: I0131 14:43:00.089468 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.108858 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.121314 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.121369 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.121379 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.121399 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.121411 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:00Z","lastTransitionTime":"2026-01-31T14:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.141242 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3446cde960ab284f70a338a9a4bfee334fdbdef503d868692fb847f4df6acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20273cc05bc490f8dd5258dccd02aa0e97d1d22c159ce2c177594920f4af83de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://701178503ef461547e687442de0607af20cee8bca075347d1d1cb2a439562232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5623c591d9a6806335a9b671648fc172e749568a7ad29c3bd41f1a990ba56010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://357afc89f219ac34037aaeb7086076b36f05907502c53006aeb588eaf440cc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8f6719bdf2fef7e21f7c5cdcb3de4b4f08e69bd62e22cb10fe157f638b3c148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://373c89defd0c3e17f3124be6af9afba6b241a48af85f558bb51d281d16ba27ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://307462c29efce1de33c4d50cdbc74a6cb86d00b128964370024b108640b05b5e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:42:29Z\\\",\\\"message\\\":\\\"t:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 14:42:29.388826 6425 services_controller.go:443] Built service openshift-etcd/etcd LB cluster-wide configs for network=default: 
[]services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.253\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:2379, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.253\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9979, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0131 14:42:29.388860 6425 services_controller.go:444] Built service openshift-etcd/etcd LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0131 14:42:29.388874 6425 services_controller.go:445] Built service openshift-etcd/etcd LB template configs for network=default: []services.lbConfig(nil)\\\\nF0131 14:42:29.388002 6425 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to 
create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"nam
e\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f5e784895f315aa4d8fb6039b9e70c46f977a80fdd5d39ead81d75048abd6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"rea
dy\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.157388 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e00
5ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.171711 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64468352-f9fe-48bb-b204-b9f828c06bf8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56c1e31014f9e3d0be8140f58cff1c752ad4be1c6c60a942bc18320bbd37b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a7c5739a571e6f3ec88c3798ad2604382b9320c44ddda3d41681a64c6ab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a6478c4477b785bcb405d597f1c835faaf4ef7adb3a2bcd6e70cc2e692f44d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cbc9dfffff59a784e871d47bf2ab1b8419b11840eb329ce21c0f7c24e6e77c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3cbc9dfffff59a784e871d47bf2ab1b8419b11840eb329ce21c0f7c24e6e77c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.191012 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.211041 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.224728 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.224762 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.224771 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.224787 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.224797 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:00Z","lastTransitionTime":"2026-01-31T14:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.227971 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.246507 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250c
ae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.328054 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.328292 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.328353 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:00 crc 
kubenswrapper[4751]: I0131 14:43:00.328450 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.328507 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:00Z","lastTransitionTime":"2026-01-31T14:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.406789 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 23:13:41.196467546 +0000 UTC Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.420036 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.431306 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.431350 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.431367 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.431387 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.431405 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:00Z","lastTransitionTime":"2026-01-31T14:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.534019 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.534430 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.534561 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.534684 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.534803 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:00Z","lastTransitionTime":"2026-01-31T14:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.639346 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.639398 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.639413 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.639436 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.639501 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:00Z","lastTransitionTime":"2026-01-31T14:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.742625 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.742705 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.742729 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.742759 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.742839 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:00Z","lastTransitionTime":"2026-01-31T14:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.846213 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.846461 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.846715 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.846943 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.847160 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:00Z","lastTransitionTime":"2026-01-31T14:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.949159 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.950131 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.950447 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.950037 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8cdt_ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b/ovnkube-controller/3.log" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.950742 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.950988 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:00Z","lastTransitionTime":"2026-01-31T14:43:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.952060 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8cdt_ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b/ovnkube-controller/2.log" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.955525 4751 generic.go:334] "Generic (PLEG): container finished" podID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerID="373c89defd0c3e17f3124be6af9afba6b241a48af85f558bb51d281d16ba27ac" exitCode=1 Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.956369 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" event={"ID":"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b","Type":"ContainerDied","Data":"373c89defd0c3e17f3124be6af9afba6b241a48af85f558bb51d281d16ba27ac"} Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.956473 4751 scope.go:117] "RemoveContainer" containerID="307462c29efce1de33c4d50cdbc74a6cb86d00b128964370024b108640b05b5e" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.957354 4751 scope.go:117] "RemoveContainer" containerID="373c89defd0c3e17f3124be6af9afba6b241a48af85f558bb51d281d16ba27ac" Jan 31 14:43:00 crc kubenswrapper[4751]: E0131 14:43:00.957617 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n8cdt_openshift-ovn-kubernetes(ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" Jan 31 14:43:00 crc kubenswrapper[4751]: I0131 14:43:00.974954 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:00Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.006223 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3446cde960ab284f70a338a9a4bfee334fdbdef503d868692fb847f4df6acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20273cc05bc490f8dd5258dccd02aa0e97d1d22c159ce2c177594920f4af83de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://701178503ef461547e687442de0607af20cee8bca075347d1d1cb2a439562232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5623c591d9a6806335a9b671648fc172e749568a7ad29c3bd41f1a990ba56010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://357afc89f219ac34037aaeb7086076b36f05907502c53006aeb588eaf440cc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8f6719bdf2fef7e21f7c5cdcb3de4b4f08e69bd62e22cb10fe157f638b3c148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://373c89defd0c3e17f3124be6af9afba6b241a48af85f558bb51d281d16ba27ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://307462c29efce1de33c4d50cdbc74a6cb86d00b128964370024b108640b05b5e\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:42:29Z\\\",\\\"message\\\":\\\"t:\\\\u003cnil\\\\u003e Where:[where column _uuid == {e3c4661a-36a6-47f0-a6c0-a4ee741f2224}] Until: Durable:\\\\u003cnil\\\\u003e Comment:\\\\u003cnil\\\\u003e Lock:\\\\u003cnil\\\\u003e UUID: UUIDName:}]\\\\nI0131 14:42:29.388826 6425 services_controller.go:443] Built service openshift-etcd/etcd LB cluster-wide configs for network=default: 
[]services.lbConfig{services.lbConfig{vips:[]string{\\\\\\\"10.217.5.253\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:2379, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}, services.lbConfig{vips:[]string{\\\\\\\"10.217.5.253\\\\\\\"}, protocol:\\\\\\\"TCP\\\\\\\", inport:9979, clusterEndpoints:services.lbEndpoints{Port:0, V4IPs:[]string(nil), V6IPs:[]string(nil)}, nodeEndpoints:map[string]services.lbEndpoints{}, externalTrafficLocal:false, internalTrafficLocal:false, hasNodePort:false}}\\\\nI0131 14:42:29.388860 6425 services_controller.go:444] Built service openshift-etcd/etcd LB per-node configs for network=default: []services.lbConfig(nil)\\\\nI0131 14:42:29.388874 6425 services_controller.go:445] Built service openshift-etcd/etcd LB template configs for network=default: []services.lbConfig(nil)\\\\nF0131 14:42:29.388002 6425 ovnkube.go:137] failed to run ovnkube: [failed to start network controller: failed to start default network controller: unable to create\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373c89defd0c3e17f3124be6af9afba6b241a48af85f558bb51d281d16ba27ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:43:00Z\\\",\\\"message\\\":\\\"t handler 5\\\\nI0131 14:43:00.256116 6857 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 14:43:00.256190 6857 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 14:43:00.256429 6857 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:43:00.256447 6857 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:43:00.256455 6857 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:43:00.256450 6857 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0131 14:43:00.256470 6857 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:43:00.256481 6857 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:43:00.256463 6857 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:43:00.256712 6857 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\
"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f5e784895f315aa4d8fb6039b9e70c46f977a80fdd5d39ead81d75048abd6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-
log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:01Z is after 
2025-08-24T17:21:41Z" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.026201 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"
quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.046974 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e00
5ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.053417 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.053457 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.053475 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.053498 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.053516 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:01Z","lastTransitionTime":"2026-01-31T14:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.067792 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"64468352-f9fe-48bb-b204-b9f828c06bf8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56c1e31014f9e3d0be8140f58cff1c752ad4be1c6c60a942bc18320bbd37b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a7c5739a571e6f3ec88c3798ad2
604382b9320c44ddda3d41681a64c6ab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a6478c4477b785bcb405d597f1c835faaf4ef7adb3a2bcd6e70cc2e692f44d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cbc9dfffff59a784e871d47bf2ab1b8419b11840eb329ce21c0f7c24e6e77c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3cbc9dfffff59a784e871d47bf2ab1b8419b11840eb329ce21c0f7c24e6e77c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.088361 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.106420 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.124894 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.141909 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250c
ae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.156641 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.156679 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.156691 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:01 crc 
kubenswrapper[4751]: I0131 14:43:01.156712 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.156729 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:01Z","lastTransitionTime":"2026-01-31T14:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.157364 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af117b1f-6308-4303-bff0-ebc3a310c356\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e9f62b49c0d916da6e1631f3216d52fd37ab407e878dc0509ccb19d0e5fb1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0845dfce4ee156b5b52e07b6257d62908413eba9570b3767b9f00724e81e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0845dfce4ee156b5b52e07b6257d62908413eba9570b3767b9f00724e81e034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:43:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.174768 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]
}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.198034 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bdcbdac0cc4b17e027947c041a0ee4a7d7f549aa6dbe5c07c370ca7c0c50475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o:/
/7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:42:49Z\\\",\\\"message\\\":\\\"2026-01-31T14:42:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eaa2f7bd-5331-496e-8b4a-3784e7751508\\\\n2026-01-31T14:42:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eaa2f7bd-5331-496e-8b4a-3784e7751508 to /host/opt/cni/bin/\\\\n2026-01-31T14:42:04Z [verbose] multus-daemon started\\\\n2026-01-31T14:42:04Z [verbose] Readiness Indicator file check\\\\n2026-01-31T14:42:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/
lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.223337 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c893782bcbcbd370030fe164c6849abe258ce88db65e3d22c269d20d0bef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c73f
b39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.242990 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8c0730-67df-445e-a6ce-c2edce5d9c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f27098d3c41dbb10f76ba04a8a989e91ff3eb6fe0fb0ca746e33417839235c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34680c760b5c6a6e2a521731e962301b54aa3184d5d66792fb43e991c6502a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4v79q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-31T14:43:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.264812 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.265051 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.265186 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.265276 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.265351 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:01Z","lastTransitionTime":"2026-01-31T14:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.267275 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d
95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:850
6ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.281629 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lxrfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b895e2a-7887-41c3-b641-9c72bb085dda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d0546721c181de3a5f2f4e791dcaeba69f94467bfbf08b4470332a0b4b49fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9hbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lxrfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.296463 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xtn6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68aeb9c7-d3c3-4c34-96ab-bb947421c504\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xtn6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:01 crc 
kubenswrapper[4751]: I0131 14:43:01.315697 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.338590 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:43:01 crc kubenswrapper[4751]: E0131 14:43:01.338766 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:05.338737758 +0000 UTC m=+149.713450653 (durationBeforeRetry 1m4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.338850 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.338889 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.338931 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.338969 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: 
\"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:43:01 crc kubenswrapper[4751]: E0131 14:43:01.339135 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:43:01 crc kubenswrapper[4751]: E0131 14:43:01.339177 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:43:01 crc kubenswrapper[4751]: E0131 14:43:01.339198 4751 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:43:01 crc kubenswrapper[4751]: E0131 14:43:01.339234 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 31 14:43:01 crc kubenswrapper[4751]: E0131 14:43:01.339244 4751 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:43:01 crc kubenswrapper[4751]: E0131 14:43:01.339260 4751 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 31 14:43:01 crc kubenswrapper[4751]: E0131 14:43:01.339436 4751 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for 
pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:43:01 crc kubenswrapper[4751]: E0131 14:43:01.339272 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-31 14:44:05.339246101 +0000 UTC m=+149.713959006 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:43:01 crc kubenswrapper[4751]: E0131 14:43:01.339514 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:44:05.339497707 +0000 UTC m=+149.714210592 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 31 14:43:01 crc kubenswrapper[4751]: E0131 14:43:01.339526 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. 
No retries permitted until 2026-01-31 14:44:05.339520578 +0000 UTC m=+149.714233453 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 31 14:43:01 crc kubenswrapper[4751]: E0131 14:43:01.339150 4751 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:43:01 crc kubenswrapper[4751]: E0131 14:43:01.339747 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-31 14:44:05.339689602 +0000 UTC m=+149.714402517 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.370583 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.370673 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.370693 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.370723 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.370749 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:01Z","lastTransitionTime":"2026-01-31T14:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.405273 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:43:01 crc kubenswrapper[4751]: E0131 14:43:01.405432 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.405508 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.405560 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.405526 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:43:01 crc kubenswrapper[4751]: E0131 14:43:01.405696 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:43:01 crc kubenswrapper[4751]: E0131 14:43:01.405819 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:43:01 crc kubenswrapper[4751]: E0131 14:43:01.405868 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.408279 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-06 23:54:06.89267671 +0000 UTC Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.473736 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.473787 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.473799 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.473817 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.473828 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:01Z","lastTransitionTime":"2026-01-31T14:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.576996 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.577084 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.577100 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.577124 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.577141 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:01Z","lastTransitionTime":"2026-01-31T14:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.679828 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.679915 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.679933 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.679962 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.679980 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:01Z","lastTransitionTime":"2026-01-31T14:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.783577 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.783911 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.784062 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.784236 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.784382 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:01Z","lastTransitionTime":"2026-01-31T14:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.887806 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.887850 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.887860 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.887878 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.887889 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:01Z","lastTransitionTime":"2026-01-31T14:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.961591 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8cdt_ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b/ovnkube-controller/3.log" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.967548 4751 scope.go:117] "RemoveContainer" containerID="373c89defd0c3e17f3124be6af9afba6b241a48af85f558bb51d281d16ba27ac" Jan 31 14:43:01 crc kubenswrapper[4751]: E0131 14:43:01.967833 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n8cdt_openshift-ovn-kubernetes(ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.992809 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.992859 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.992870 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.992888 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.992900 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:01Z","lastTransitionTime":"2026-01-31T14:43:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI 
configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:43:01 crc kubenswrapper[4751]: I0131 14:43:01.992803 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:01Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.006143 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lxrfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b895e2a-7887-41c3-b641-9c72bb085dda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d0546721c181de3a5f2f4e791dcaeba69f94467bfbf08b4470332a0b4b49fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9hbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lxrfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.020748 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xtn6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68aeb9c7-d3c3-4c34-96ab-bb947421c504\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xtn6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:02 crc 
kubenswrapper[4751]: I0131 14:43:02.035512 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.051582 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.095443 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.095483 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.095492 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.095507 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.095517 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:02Z","lastTransitionTime":"2026-01-31T14:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.108472 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3446cde960ab284f70a338a9a4bfee334fdbdef503d868692fb847f4df6acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20273cc05bc490f8dd5258dccd02aa0e97d1d22c159ce2c177594920f4af83de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://701178503ef461547e687442de0607af20cee8bca075347d1d1cb2a439562232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5623c591d9a6806335a9b671648fc172e749568a7ad29c3bd41f1a990ba56010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://357afc89f219ac34037aaeb7086076b36f05907502c53006aeb588eaf440cc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8f6719bdf2fef7e21f7c5cdcb3de4b4f08e69bd62e22cb10fe157f638b3c148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://373c89defd0c3e17f3124be6af9afba6b241a48af85f558bb51d281d16ba27ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373c89defd0c3e17f3124be6af9afba6b241a48af85f558bb51d281d16ba27ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:43:00Z\\\",\\\"message\\\":\\\"t handler 5\\\\nI0131 14:43:00.256116 6857 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 14:43:00.256190 6857 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 14:43:00.256429 6857 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:43:00.256447 6857 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:43:00.256455 6857 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:43:00.256450 6857 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0131 14:43:00.256470 6857 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:43:00.256481 6857 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:43:00.256463 6857 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:43:00.256712 6857 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n8cdt_openshift-ovn-kubernetes(ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f5e784895f315aa4d8fb6039b9e70c46f977a80fdd5d39ead81d75048abd6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab
2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.121820 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64468352-f9fe-48bb-b204-b9f828c06bf8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56c1e31014f9e3d0be8140f58cff1c752ad4be1c6c60a942bc18320bbd37b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a7c5739a571e6f3ec88c3798ad2604382b9320c44ddda3d41681a64c6ab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a6478c4477b785bcb405d597f1c835faaf4ef7adb3a2bcd6e70cc2e692f44d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cbc9dfffff59a784e871d47bf2ab1b8419b11840eb329ce21c0f7c24e6e77c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3cbc9dfffff59a784e871d47bf2ab1b8419b11840eb329ce21c0f7c24e6e77c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.140772 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.156664 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.176147 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.192559 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250c
ae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.198734 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.198775 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.198787 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:02 crc 
kubenswrapper[4751]: I0131 14:43:02.198844 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.198859 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:02Z","lastTransitionTime":"2026-01-31T14:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.214478 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af117b1f-6308-4303-bff0-ebc3a310c356\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e9f62b49c0d916da6e1631f3216d52fd37ab407e878dc0509ccb19d0e5fb1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0845dfce4ee156b5b52e07b6257d62908413eba9570b3767b9f00724e81e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0845dfce4ee156b5b52e07b6257d62908413eba9570b3767b9f00724e81e034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:43:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.233030 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\"
:\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiser
ver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 
14:41:55.854881 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/oc
p-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.257455 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c893782bcbcbd370030fe164c6849abe258ce88db65e3d22c269d20d0bef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c73f
b39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.273776 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8c0730-67df-445e-a6ce-c2edce5d9c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f27098d3c41dbb10f76ba04a8a989e91ff3eb6fe0fb0ca746e33417839235c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34680c760b5c6a6e2a521731e962301b54aa3184d5d66792fb43e991c6502a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4v79q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-31T14:43:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.292747 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.302244 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.302323 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.302349 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.302380 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.302469 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:02Z","lastTransitionTime":"2026-01-31T14:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.310362 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.330504 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bdcbdac0cc4b17e027947c041a0ee4a7d7f549aa6dbe5c07c370ca7c0c50475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:42:49Z\\\",\\\"message\\\":\\\"2026-01-31T14:42:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eaa2f7bd-5331-496e-8b4a-3784e7751508\\\\n2026-01-31T14:42:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eaa2f7bd-5331-496e-8b4a-3784e7751508 to /host/opt/cni/bin/\\\\n2026-01-31T14:42:04Z [verbose] multus-daemon started\\\\n2026-01-31T14:42:04Z [verbose] Readiness Indicator file check\\\\n2026-01-31T14:42:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\
\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:02Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.405426 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.405507 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.405519 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.405535 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.405547 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:02Z","lastTransitionTime":"2026-01-31T14:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.409594 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 13:05:18.724521635 +0000 UTC Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.426615 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.509277 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.509360 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.509387 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.509418 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.509442 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:02Z","lastTransitionTime":"2026-01-31T14:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.612389 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.612450 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.612468 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.612492 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.612509 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:02Z","lastTransitionTime":"2026-01-31T14:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.714844 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.714903 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.714915 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.714934 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.714947 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:02Z","lastTransitionTime":"2026-01-31T14:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.817961 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.818001 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.818012 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.818097 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.818112 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:02Z","lastTransitionTime":"2026-01-31T14:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.920541 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.920592 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.920609 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.920634 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:02 crc kubenswrapper[4751]: I0131 14:43:02.920652 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:02Z","lastTransitionTime":"2026-01-31T14:43:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.024174 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.024221 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.024236 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.024255 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.024266 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:03Z","lastTransitionTime":"2026-01-31T14:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.127849 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.127893 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.127907 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.127927 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.127941 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:03Z","lastTransitionTime":"2026-01-31T14:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.230906 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.230950 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.230961 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.230977 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.230989 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:03Z","lastTransitionTime":"2026-01-31T14:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.333115 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.333149 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.333159 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.333174 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.333184 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:03Z","lastTransitionTime":"2026-01-31T14:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.405011 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.405050 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.405058 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.405031 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:43:03 crc kubenswrapper[4751]: E0131 14:43:03.405201 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:43:03 crc kubenswrapper[4751]: E0131 14:43:03.405301 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:43:03 crc kubenswrapper[4751]: E0131 14:43:03.405415 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:43:03 crc kubenswrapper[4751]: E0131 14:43:03.405495 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.409738 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 23:32:30.049192764 +0000 UTC Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.436528 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.436566 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.436575 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.436592 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.436604 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:03Z","lastTransitionTime":"2026-01-31T14:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.539909 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.539961 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.539977 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.539999 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.540016 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:03Z","lastTransitionTime":"2026-01-31T14:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.644296 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.644355 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.644375 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.644397 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.644414 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:03Z","lastTransitionTime":"2026-01-31T14:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.747415 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.747466 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.747486 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.747510 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.747529 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:03Z","lastTransitionTime":"2026-01-31T14:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.850296 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.850386 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.850403 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.850429 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.850447 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:03Z","lastTransitionTime":"2026-01-31T14:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.953145 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.953194 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.953211 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.953236 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:03 crc kubenswrapper[4751]: I0131 14:43:03.953253 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:03Z","lastTransitionTime":"2026-01-31T14:43:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.056379 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.056440 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.056458 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.056482 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.056501 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:04Z","lastTransitionTime":"2026-01-31T14:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.083446 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.083532 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.083551 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.083606 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.083622 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:04Z","lastTransitionTime":"2026-01-31T14:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:04 crc kubenswrapper[4751]: E0131 14:43:04.105135 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2bc08d22-1e39-4800-b402-ea260cc19637\\\",\\\"systemUUID\\\":\\\"ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.111107 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.111164 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.111174 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.111189 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.111226 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:04Z","lastTransitionTime":"2026-01-31T14:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:04 crc kubenswrapper[4751]: E0131 14:43:04.129645 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2bc08d22-1e39-4800-b402-ea260cc19637\\\",\\\"systemUUID\\\":\\\"ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.134782 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.134858 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.134881 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.134915 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.134942 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:04Z","lastTransitionTime":"2026-01-31T14:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:04 crc kubenswrapper[4751]: E0131 14:43:04.155755 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2bc08d22-1e39-4800-b402-ea260cc19637\\\",\\\"systemUUID\\\":\\\"ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.160591 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.160632 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.160643 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.160661 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.160671 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:04Z","lastTransitionTime":"2026-01-31T14:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:04 crc kubenswrapper[4751]: E0131 14:43:04.178965 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2bc08d22-1e39-4800-b402-ea260cc19637\\\",\\\"systemUUID\\\":\\\"ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.184226 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.184285 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.184303 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.184328 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.184348 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:04Z","lastTransitionTime":"2026-01-31T14:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:04 crc kubenswrapper[4751]: E0131 14:43:04.203877 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:04Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2bc08d22-1e39-4800-b402-ea260cc19637\\\",\\\"systemUUID\\\":\\\"ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef\\\"}}}\" for node 
\"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:04Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:04 crc kubenswrapper[4751]: E0131 14:43:04.204182 4751 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.206630 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.206705 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.206718 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.206742 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.206758 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:04Z","lastTransitionTime":"2026-01-31T14:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.310528 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.310574 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.310590 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.310615 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.310634 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:04Z","lastTransitionTime":"2026-01-31T14:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.410661 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 08:40:46.994417485 +0000 UTC Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.415539 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.415612 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.415629 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.415653 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.415670 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:04Z","lastTransitionTime":"2026-01-31T14:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.519622 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.519700 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.519726 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.519756 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.519776 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:04Z","lastTransitionTime":"2026-01-31T14:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.623576 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.623652 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.623676 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.623706 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.623729 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:04Z","lastTransitionTime":"2026-01-31T14:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.727723 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.727788 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.727806 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.727834 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.727850 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:04Z","lastTransitionTime":"2026-01-31T14:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.831581 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.831673 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.831692 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.831719 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.831739 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:04Z","lastTransitionTime":"2026-01-31T14:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.935970 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.936023 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.936035 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.936057 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:04 crc kubenswrapper[4751]: I0131 14:43:04.936087 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:04Z","lastTransitionTime":"2026-01-31T14:43:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.039156 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.039218 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.039241 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.039270 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.039291 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:05Z","lastTransitionTime":"2026-01-31T14:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.142921 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.142988 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.143009 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.143038 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.143058 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:05Z","lastTransitionTime":"2026-01-31T14:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.246316 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.246374 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.246392 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.246413 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.246431 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:05Z","lastTransitionTime":"2026-01-31T14:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.349533 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.349607 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.349632 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.349659 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.349724 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:05Z","lastTransitionTime":"2026-01-31T14:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.405800 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.405828 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.405908 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.405908 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:43:05 crc kubenswrapper[4751]: E0131 14:43:05.406114 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:43:05 crc kubenswrapper[4751]: E0131 14:43:05.406528 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:43:05 crc kubenswrapper[4751]: E0131 14:43:05.406624 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:43:05 crc kubenswrapper[4751]: E0131 14:43:05.406768 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.411817 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 04:49:47.597623665 +0000 UTC Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.452773 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.452837 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.452857 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.452885 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.452904 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:05Z","lastTransitionTime":"2026-01-31T14:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.557123 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.557181 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.557192 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.557213 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.557225 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:05Z","lastTransitionTime":"2026-01-31T14:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.660776 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.660834 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.660852 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.660878 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.660898 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:05Z","lastTransitionTime":"2026-01-31T14:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.764181 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.764247 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.764265 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.764290 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.764308 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:05Z","lastTransitionTime":"2026-01-31T14:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.866427 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.866497 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.866514 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.866544 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.866563 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:05Z","lastTransitionTime":"2026-01-31T14:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.968699 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.968764 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.968784 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.968813 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:05 crc kubenswrapper[4751]: I0131 14:43:05.968832 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:05Z","lastTransitionTime":"2026-01-31T14:43:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.071993 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.072059 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.072145 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.072172 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.072189 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:06Z","lastTransitionTime":"2026-01-31T14:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.176110 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.176173 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.176187 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.176210 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.176227 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:06Z","lastTransitionTime":"2026-01-31T14:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.279792 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.279881 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.279899 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.279923 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.279970 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:06Z","lastTransitionTime":"2026-01-31T14:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.383207 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.383265 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.383281 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.383304 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.383325 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:06Z","lastTransitionTime":"2026-01-31T14:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.411993 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 17:41:31.866259288 +0000 UTC Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.426792 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bdcbdac0cc4b17e027947c041a0ee4a7d7f549aa6dbe5c07c370ca7c0c50475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":
\\\"2026-01-31T14:42:49Z\\\",\\\"message\\\":\\\"2026-01-31T14:42:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eaa2f7bd-5331-496e-8b4a-3784e7751508\\\\n2026-01-31T14:42:03+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eaa2f7bd-5331-496e-8b4a-3784e7751508 to /host/opt/cni/bin/\\\\n2026-01-31T14:42:04Z [verbose] multus-daemon started\\\\n2026-01-31T14:42:04Z [verbose] Readiness Indicator file check\\\\n2026-01-31T14:42:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\"
:\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.447464 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c893782bcbcbd370030fe164c6849abe258ce88db65e3d22c269d20d0bef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c73f
b39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.461486 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8c0730-67df-445e-a6ce-c2edce5d9c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f27098d3c41dbb10f76ba04a8a989e91ff3eb6fe0fb0ca746e33417839235c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"ru
nning\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34680c760b5c6a6e2a521731e962301b54aa3184d5d66792fb43e991c6502a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4v79q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current 
time 2026-01-31T14:43:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.478790 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\
"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager
-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.486258 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.486324 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.486343 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.486390 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.486410 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:06Z","lastTransitionTime":"2026-01-31T14:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.493219 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\
"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.510031 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xtn6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68aeb9c7-d3c3-4c34-96ab-bb947421c504\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xtn6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:06 crc 
kubenswrapper[4751]: I0131 14:43:06.533288 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.543369 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lxrfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b895e2a-7887-41c3-b641-9c72bb085dda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d0546721c181de3a5f2f4e791dcaeba69f94467bfbf08b4470332a0b4b49fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9hbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lxrfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.565400 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3446cde960ab284f70a338a9a4bfee334fdbdef503d868692fb847f4df6acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20273cc05bc490f8dd5258dccd02aa0e97d1d22c159ce2c177594920f4af83de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://701178503ef461547e687442de0607af20cee8bca075347d1d1cb2a439562232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5623c591d9a6806335a9b671648fc172e749568a7ad29c3bd41f1a990ba56010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://357afc89f219ac34037aaeb7086076b36f05907502c53006aeb588eaf440cc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8f6719bdf2fef7e21f7c5cdcb3de4b4f08e69bd62e22cb10fe157f638b3c148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://373c89defd0c3e17f3124be6af9afba6b241a48af85f558bb51d281d16ba27ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373c89defd0c3e17f3124be6af9afba6b241a48af85f558bb51d281d16ba27ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:43:00Z\\\",\\\"message\\\":\\\"t handler 5\\\\nI0131 14:43:00.256116 6857 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 14:43:00.256190 6857 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 14:43:00.256429 6857 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:43:00.256447 6857 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:43:00.256455 6857 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:43:00.256450 6857 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0131 14:43:00.256470 6857 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:43:00.256481 6857 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:43:00.256463 6857 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:43:00.256712 6857 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n8cdt_openshift-ovn-kubernetes(ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f5e784895f315aa4d8fb6039b9e70c46f977a80fdd5d39ead81d75048abd6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab
2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.580558 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af
0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.589640 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.589685 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.589702 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.589727 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.589744 4751 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:06Z","lastTransitionTime":"2026-01-31T14:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.593734 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\
\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.612873 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c9871
17ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\"
,\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use 
of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e00
5ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.625575 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64468352-f9fe-48bb-b204-b9f828c06bf8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56c1e31014f9e3d0be8140f58cff1c752ad4be1c6c60a942bc18320bbd37b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a7c5739a571e6f3ec88c3798ad2604382b9320c44ddda3d41681a64c6ab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a6478c4477b785bcb405d597f1c835faaf4ef7adb3a2bcd6e70cc2e692f44d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cbc9dfffff59a784e871d47bf2ab1b8419b11840eb329ce21c0f7c24e6e77c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3cbc9dfffff59a784e871d47bf2ab1b8419b11840eb329ce21c0f7c24e6e77c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.639553 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.656063 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.670446 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.682212 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250c
ae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.695210 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.695273 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.696271 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:06 crc 
kubenswrapper[4751]: I0131 14:43:06.696310 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.696331 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:06Z","lastTransitionTime":"2026-01-31T14:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.696664 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"af117b1f-6308-4303-bff0-ebc3a310c356\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e9f62b49c0d916da6e1631f3216d52fd37ab407e878dc0509ccb19d0e5fb1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"im
ageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0845dfce4ee156b5b52e07b6257d62908413eba9570b3767b9f00724e81e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0845dfce4ee156b5b52e07b6257d62908413eba9570b3767b9f00724e81e034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:43:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.728168 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"23027958-cbc9-4206-8dd5-13f10df7f298\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4a4eb52c2c850f91c212fdc556452ab8cc91168ddb67c2078b806d8725be2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod
-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e66ea760a35f4e073d5ead7b0270164010b4dd14737b23202f83a10290f75d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa739a6a66bd2196c9131cf929bdb8a133e3e40c3dfa9a105bb3ea33fa2ede20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92d196e489f72bd3c04ada6d0ea993f0ad89eb42497efc8723720ca3a7720509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\
\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b0fe57d51f2684ba60b1818c1e3010e5364c6d196433972b46cb3c3f9b5e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccd9efb7096722c8a48318444b235a1970fbec711faf7448d47696ff84da5d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccd9efb7096722c8a48318444b235a1970fbec711faf7448d47696ff84da5d37\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1020dca4733e38925646f97eb80524c4060630e33323e9a5a0fdc4634c6b468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1020dca4733e38925646f97eb80524c4060630e33323e9a5a0fdc4634c6b468\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bf0f78147bc50d98a5ba239c2456467778fb4724433d914b9ee4300ce3af6e4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf0f78147bc50d98a5ba239c2456467778fb4724433d914b9ee4300ce3af6e4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:39Z\\\"
,\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:06Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.800226 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.800278 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.800295 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.800322 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.800342 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:06Z","lastTransitionTime":"2026-01-31T14:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no 
CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.904211 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.904281 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.904302 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.904332 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:06 crc kubenswrapper[4751]: I0131 14:43:06.904354 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:06Z","lastTransitionTime":"2026-01-31T14:43:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.006427 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.006916 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.007212 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.007435 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.007653 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:07Z","lastTransitionTime":"2026-01-31T14:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.111244 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.111308 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.111329 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.111356 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.111375 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:07Z","lastTransitionTime":"2026-01-31T14:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.214924 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.214969 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.214981 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.214999 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.215012 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:07Z","lastTransitionTime":"2026-01-31T14:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.317423 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.317476 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.317492 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.317513 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.317529 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:07Z","lastTransitionTime":"2026-01-31T14:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.405804 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.405910 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.405921 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:43:07 crc kubenswrapper[4751]: E0131 14:43:07.406538 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:43:07 crc kubenswrapper[4751]: E0131 14:43:07.406710 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.406021 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:43:07 crc kubenswrapper[4751]: E0131 14:43:07.406918 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:43:07 crc kubenswrapper[4751]: E0131 14:43:07.406910 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.412837 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 04:48:19.34428897 +0000 UTC Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.421004 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.421102 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.421127 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.421159 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.421185 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:07Z","lastTransitionTime":"2026-01-31T14:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.524924 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.525007 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.525025 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.525050 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.525113 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:07Z","lastTransitionTime":"2026-01-31T14:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.627672 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.627707 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.627716 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.627730 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.627740 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:07Z","lastTransitionTime":"2026-01-31T14:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.731641 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.731704 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.731725 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.731753 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.731771 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:07Z","lastTransitionTime":"2026-01-31T14:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.835055 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.835145 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.835163 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.835188 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.835205 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:07Z","lastTransitionTime":"2026-01-31T14:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.938139 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.938191 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.938203 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.938221 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:07 crc kubenswrapper[4751]: I0131 14:43:07.938233 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:07Z","lastTransitionTime":"2026-01-31T14:43:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.040809 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.040877 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.040896 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.040920 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.040938 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:08Z","lastTransitionTime":"2026-01-31T14:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.143604 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.143635 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.143643 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.143656 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.143667 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:08Z","lastTransitionTime":"2026-01-31T14:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.245688 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.245762 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.245788 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.245814 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.245831 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:08Z","lastTransitionTime":"2026-01-31T14:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.347936 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.347990 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.348008 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.348032 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.348048 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:08Z","lastTransitionTime":"2026-01-31T14:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.413763 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-24 23:02:36.237980655 +0000 UTC Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.449991 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.450016 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.450023 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.450035 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.450043 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:08Z","lastTransitionTime":"2026-01-31T14:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.553116 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.553191 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.553212 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.553248 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.553270 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:08Z","lastTransitionTime":"2026-01-31T14:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.656306 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.656350 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.656361 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.656378 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.656390 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:08Z","lastTransitionTime":"2026-01-31T14:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.758606 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.758674 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.758692 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.758718 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.758736 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:08Z","lastTransitionTime":"2026-01-31T14:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.860787 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.860826 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.860834 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.860847 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.860856 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:08Z","lastTransitionTime":"2026-01-31T14:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.963648 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.963727 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.963747 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.963770 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:08 crc kubenswrapper[4751]: I0131 14:43:08.963787 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:08Z","lastTransitionTime":"2026-01-31T14:43:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.066326 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.066364 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.066375 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.066390 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.066402 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:09Z","lastTransitionTime":"2026-01-31T14:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.168898 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.168937 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.168948 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.168995 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.169012 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:09Z","lastTransitionTime":"2026-01-31T14:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.272532 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.272581 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.272597 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.272619 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.272635 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:09Z","lastTransitionTime":"2026-01-31T14:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.376017 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.376059 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.376091 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.376110 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.376122 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:09Z","lastTransitionTime":"2026-01-31T14:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.405190 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.405216 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.405302 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:43:09 crc kubenswrapper[4751]: E0131 14:43:09.405343 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.405383 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:43:09 crc kubenswrapper[4751]: E0131 14:43:09.405513 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:43:09 crc kubenswrapper[4751]: E0131 14:43:09.405606 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:43:09 crc kubenswrapper[4751]: E0131 14:43:09.405691 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.413853 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 09:39:50.737406983 +0000 UTC Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.479600 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.479702 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.479723 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.479785 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.479805 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:09Z","lastTransitionTime":"2026-01-31T14:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.582830 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.582891 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.582912 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.582939 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.582956 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:09Z","lastTransitionTime":"2026-01-31T14:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.685335 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.685400 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.685439 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.685470 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.685494 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:09Z","lastTransitionTime":"2026-01-31T14:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.787586 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.787640 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.787701 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.787727 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.787744 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:09Z","lastTransitionTime":"2026-01-31T14:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.890573 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.890640 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.890662 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.890692 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.890713 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:09Z","lastTransitionTime":"2026-01-31T14:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.993957 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.994020 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.994043 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.994105 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:09 crc kubenswrapper[4751]: I0131 14:43:09.994135 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:09Z","lastTransitionTime":"2026-01-31T14:43:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.097263 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.097317 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.097336 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.097359 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.097377 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:10Z","lastTransitionTime":"2026-01-31T14:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.199615 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.199650 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.199658 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.199673 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.199687 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:10Z","lastTransitionTime":"2026-01-31T14:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.302921 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.302972 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.302990 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.303013 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.303030 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:10Z","lastTransitionTime":"2026-01-31T14:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.405734 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.405769 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.405780 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.405793 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.405805 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:10Z","lastTransitionTime":"2026-01-31T14:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.414489 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 08:09:51.858982127 +0000 UTC Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.508668 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.508732 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.508748 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.508773 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.508794 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:10Z","lastTransitionTime":"2026-01-31T14:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.611605 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.611670 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.611686 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.611711 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.611728 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:10Z","lastTransitionTime":"2026-01-31T14:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.714565 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.714610 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.714618 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.714632 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.714641 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:10Z","lastTransitionTime":"2026-01-31T14:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.818115 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.818179 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.818202 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.818231 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.818252 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:10Z","lastTransitionTime":"2026-01-31T14:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.921836 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.921907 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.921932 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.921960 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:10 crc kubenswrapper[4751]: I0131 14:43:10.921986 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:10Z","lastTransitionTime":"2026-01-31T14:43:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:11 crc kubenswrapper[4751]: I0131 14:43:11.024801 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:11 crc kubenswrapper[4751]: I0131 14:43:11.024876 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:11 crc kubenswrapper[4751]: I0131 14:43:11.024902 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:11 crc kubenswrapper[4751]: I0131 14:43:11.024931 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:11 crc kubenswrapper[4751]: I0131 14:43:11.024955 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:11Z","lastTransitionTime":"2026-01-31T14:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:11 crc kubenswrapper[4751]: I0131 14:43:11.127910 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:11 crc kubenswrapper[4751]: I0131 14:43:11.127977 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:11 crc kubenswrapper[4751]: I0131 14:43:11.127999 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:11 crc kubenswrapper[4751]: I0131 14:43:11.128028 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:11 crc kubenswrapper[4751]: I0131 14:43:11.128052 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:11Z","lastTransitionTime":"2026-01-31T14:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:11 crc kubenswrapper[4751]: I0131 14:43:11.231014 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:11 crc kubenswrapper[4751]: I0131 14:43:11.231101 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:11 crc kubenswrapper[4751]: I0131 14:43:11.231120 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:11 crc kubenswrapper[4751]: I0131 14:43:11.231143 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:11 crc kubenswrapper[4751]: I0131 14:43:11.231160 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:11Z","lastTransitionTime":"2026-01-31T14:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:11 crc kubenswrapper[4751]: I0131 14:43:11.334201 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:11 crc kubenswrapper[4751]: I0131 14:43:11.334493 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:11 crc kubenswrapper[4751]: I0131 14:43:11.334514 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:11 crc kubenswrapper[4751]: I0131 14:43:11.334538 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:11 crc kubenswrapper[4751]: I0131 14:43:11.334556 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:11Z","lastTransitionTime":"2026-01-31T14:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:43:11 crc kubenswrapper[4751]: I0131 14:43:11.405209 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:43:11 crc kubenswrapper[4751]: I0131 14:43:11.405274 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:43:11 crc kubenswrapper[4751]: I0131 14:43:11.405300 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:43:11 crc kubenswrapper[4751]: E0131 14:43:11.405446 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:43:11 crc kubenswrapper[4751]: I0131 14:43:11.405486 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:43:11 crc kubenswrapper[4751]: E0131 14:43:11.405812 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:43:11 crc kubenswrapper[4751]: E0131 14:43:11.405896 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:43:11 crc kubenswrapper[4751]: E0131 14:43:11.406003 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:43:11 crc kubenswrapper[4751]: I0131 14:43:11.415595 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 17:28:56.781087954 +0000 UTC Jan 31 14:43:11 crc kubenswrapper[4751]: I0131 14:43:11.437293 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:11 crc kubenswrapper[4751]: I0131 14:43:11.437373 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:11 crc kubenswrapper[4751]: I0131 14:43:11.437400 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:11 crc kubenswrapper[4751]: I0131 14:43:11.437430 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:11 crc kubenswrapper[4751]: I0131 14:43:11.437457 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:11Z","lastTransitionTime":"2026-01-31T14:43:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:12 crc kubenswrapper[4751]: I0131 14:43:12.799399 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:12 crc kubenswrapper[4751]: I0131 14:43:12.799443 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:12 crc kubenswrapper[4751]: I0131 14:43:12.799459 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:12 crc kubenswrapper[4751]: I0131 14:43:12.799483 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:12 crc kubenswrapper[4751]: I0131 14:43:12.799500 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:12Z","lastTransitionTime":"2026-01-31T14:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:43:12 crc kubenswrapper[4751]: I0131 14:43:12.814640 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 14:44:52.235904482 +0000 UTC Jan 31 14:43:12 crc kubenswrapper[4751]: I0131 14:43:12.819437 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:43:12 crc kubenswrapper[4751]: E0131 14:43:12.819639 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:43:12 crc kubenswrapper[4751]: I0131 14:43:12.819944 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:43:12 crc kubenswrapper[4751]: E0131 14:43:12.820048 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:43:12 crc kubenswrapper[4751]: I0131 14:43:12.820295 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:43:12 crc kubenswrapper[4751]: E0131 14:43:12.820478 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:43:12 crc kubenswrapper[4751]: I0131 14:43:12.820555 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:43:12 crc kubenswrapper[4751]: E0131 14:43:12.820730 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:43:12 crc kubenswrapper[4751]: I0131 14:43:12.902545 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:12 crc kubenswrapper[4751]: I0131 14:43:12.902630 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:12 crc kubenswrapper[4751]: I0131 14:43:12.902649 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:12 crc kubenswrapper[4751]: I0131 14:43:12.902672 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:12 crc kubenswrapper[4751]: I0131 14:43:12.902688 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:12Z","lastTransitionTime":"2026-01-31T14:43:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.005491 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.005537 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.005556 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.005578 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.005595 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:13Z","lastTransitionTime":"2026-01-31T14:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.108406 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.108463 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.108481 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.108508 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.108525 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:13Z","lastTransitionTime":"2026-01-31T14:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.210575 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.210608 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.210616 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.210629 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.210638 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:13Z","lastTransitionTime":"2026-01-31T14:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.313766 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.313810 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.313828 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.313850 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.313868 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:13Z","lastTransitionTime":"2026-01-31T14:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.417372 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.417411 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.417420 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.417435 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.417444 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:13Z","lastTransitionTime":"2026-01-31T14:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.520364 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.520434 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.520457 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.520488 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.520509 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:13Z","lastTransitionTime":"2026-01-31T14:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.623438 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.623483 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.623495 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.623514 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.623526 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:13Z","lastTransitionTime":"2026-01-31T14:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.730225 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.730288 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.730306 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.730331 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.730350 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:13Z","lastTransitionTime":"2026-01-31T14:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.814889 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 11:18:17.208372652 +0000 UTC Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.832892 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.832958 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.832976 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.833001 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.833018 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:13Z","lastTransitionTime":"2026-01-31T14:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.937031 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.937118 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.937143 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.937189 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:13 crc kubenswrapper[4751]: I0131 14:43:13.937214 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:13Z","lastTransitionTime":"2026-01-31T14:43:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.040834 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.040885 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.040902 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.040927 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.040948 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:14Z","lastTransitionTime":"2026-01-31T14:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.144153 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.144233 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.144259 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.144290 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.144314 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:14Z","lastTransitionTime":"2026-01-31T14:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.247836 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.247887 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.247904 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.247928 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.247946 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:14Z","lastTransitionTime":"2026-01-31T14:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.350479 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.350539 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.350555 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.350580 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.350674 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:14Z","lastTransitionTime":"2026-01-31T14:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.405119 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.405226 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.405261 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:43:14 crc kubenswrapper[4751]: E0131 14:43:14.405383 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.405426 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:43:14 crc kubenswrapper[4751]: E0131 14:43:14.405599 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:43:14 crc kubenswrapper[4751]: E0131 14:43:14.405761 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:43:14 crc kubenswrapper[4751]: E0131 14:43:14.405835 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.453597 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.453651 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.453670 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.453695 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.453712 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:14Z","lastTransitionTime":"2026-01-31T14:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.527769 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.527829 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.527844 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.527868 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.527892 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:14Z","lastTransitionTime":"2026-01-31T14:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:14 crc kubenswrapper[4751]: E0131 14:43:14.548658 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2bc08d22-1e39-4800-b402-ea260cc19637\\\",\\\"systemUUID\\\":\\\"ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.553725 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.553777 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.553794 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.553817 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.553834 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:14Z","lastTransitionTime":"2026-01-31T14:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:14 crc kubenswrapper[4751]: E0131 14:43:14.573812 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2bc08d22-1e39-4800-b402-ea260cc19637\\\",\\\"systemUUID\\\":\\\"ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.579172 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.579263 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.579312 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.579338 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.579355 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:14Z","lastTransitionTime":"2026-01-31T14:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:14 crc kubenswrapper[4751]: E0131 14:43:14.599713 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2bc08d22-1e39-4800-b402-ea260cc19637\\\",\\\"systemUUID\\\":\\\"ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.605017 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.605064 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.605110 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.605134 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.605151 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:14Z","lastTransitionTime":"2026-01-31T14:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:14 crc kubenswrapper[4751]: E0131 14:43:14.626449 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2bc08d22-1e39-4800-b402-ea260cc19637\\\",\\\"systemUUID\\\":\\\"ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.632582 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.632640 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.632658 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.632683 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.632712 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:14Z","lastTransitionTime":"2026-01-31T14:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:14 crc kubenswrapper[4751]: E0131 14:43:14.652528 4751 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404560Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865360Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-31T14:43:14Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc1
5c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd
1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-r
elease-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"2bc08d22-1e39-4800-b402-ea260cc19637\\\",\\\"systemUUID\\\":\\\"ad4951a2-dbbf-4c4e-af3f-cce3c25b01ef\\\"},\\\"runtimeHan
dlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":false},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:14Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:14 crc kubenswrapper[4751]: E0131 14:43:14.652815 4751 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.654838 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.654893 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.654918 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.654946 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.654968 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:14Z","lastTransitionTime":"2026-01-31T14:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.757590 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.757658 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.757684 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.757712 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.757737 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:14Z","lastTransitionTime":"2026-01-31T14:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.815687 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 22:01:52.995827389 +0000 UTC Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.859903 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.859963 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.859984 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.860013 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.860033 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:14Z","lastTransitionTime":"2026-01-31T14:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.963465 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.963509 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.963525 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.963574 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:14 crc kubenswrapper[4751]: I0131 14:43:14.963594 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:14Z","lastTransitionTime":"2026-01-31T14:43:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.066759 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.066833 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.066849 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.066873 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.066889 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:15Z","lastTransitionTime":"2026-01-31T14:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.170093 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.170144 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.170160 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.170183 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.170202 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:15Z","lastTransitionTime":"2026-01-31T14:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.273368 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.273434 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.273451 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.273477 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.273495 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:15Z","lastTransitionTime":"2026-01-31T14:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.376315 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.376361 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.376370 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.376410 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.376420 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:15Z","lastTransitionTime":"2026-01-31T14:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.480062 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.480145 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.480162 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.480185 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.480204 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:15Z","lastTransitionTime":"2026-01-31T14:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.583150 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.583275 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.583298 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.583327 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.583348 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:15Z","lastTransitionTime":"2026-01-31T14:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.686041 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.686416 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.686608 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.686761 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.686886 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:15Z","lastTransitionTime":"2026-01-31T14:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.790985 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.791060 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.791108 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.791135 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.791154 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:15Z","lastTransitionTime":"2026-01-31T14:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.816752 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 21:13:21.548094262 +0000 UTC Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.894342 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.894417 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.894436 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.894460 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.894477 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:15Z","lastTransitionTime":"2026-01-31T14:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.997739 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.997798 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.997815 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.997841 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:15 crc kubenswrapper[4751]: I0131 14:43:15.997860 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:15Z","lastTransitionTime":"2026-01-31T14:43:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.101528 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.101585 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.101603 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.101629 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.101647 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:16Z","lastTransitionTime":"2026-01-31T14:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.204617 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.204683 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.204706 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.204766 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.204785 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:16Z","lastTransitionTime":"2026-01-31T14:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.307191 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.307244 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.307262 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.307285 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.307305 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:16Z","lastTransitionTime":"2026-01-31T14:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.405472 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.405508 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:43:16 crc kubenswrapper[4751]: E0131 14:43:16.405848 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.406110 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:43:16 crc kubenswrapper[4751]: E0131 14:43:16.406928 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:43:16 crc kubenswrapper[4751]: E0131 14:43:16.407003 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.407319 4751 scope.go:117] "RemoveContainer" containerID="373c89defd0c3e17f3124be6af9afba6b241a48af85f558bb51d281d16ba27ac" Jan 31 14:43:16 crc kubenswrapper[4751]: E0131 14:43:16.407562 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n8cdt_openshift-ovn-kubernetes(ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.409127 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:43:16 crc kubenswrapper[4751]: E0131 14:43:16.409350 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.410105 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.410164 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.410188 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.410326 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.410353 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:16Z","lastTransitionTime":"2026-01-31T14:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.426168 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7adba94225f6d18961ccad4eab46916b607a2bff977a48d36fee6810aa4b7293\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://11d68e8091e3d18fff0b3c8f744c2e9002b6307e17815b221af7123715d8abf3\\\",\\\
"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.444451 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-68hvr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"658471aa-68b2-478e-9522-ef5533009174\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a5c5f03d09a887caf686dc5e5d446c4589c7e577cd65e9329cd73d1f33af3d53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-nbl8x\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podI
Ps\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-68hvr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.475500 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f3446cde960ab284f70a338a9a4bfee334fdbdef503d868692fb847f4df6acba\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://20273cc05bc490f8dd5258dccd02aa0e97d1d22c159ce2c177594920f4af83de\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\
\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://701178503ef461547e687442de0607af20cee8bca075347d1d1cb2a439562232\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5623c591d9a6806335a9b671648fc172e749568a7ad29c3bd41f1a990ba56010\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{
\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://357afc89f219ac34037aaeb7086076b36f05907502c53006aeb588eaf440cc74\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e8f6719bdf2fef7e21f7c5cdcb3de4b4f08e69bd62e22cb10fe157f638b3c148\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\"
,\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://373c89defd0c3e17f3124be6af9afba6b241a48af85f558bb51d281d16ba27ac\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://373c89defd0c3e17f3124be6af9afba6b241a48af85f558bb51d281d16ba27ac\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:43:00Z\\\",\\\"message\\\":\\\"t handler 5\\\\nI0131 14:43:00.256116 6857 reflector.go:311] Stopping reflector *v1.AdminPolicyBasedExternalRoute (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 14:43:00.256190 6857 reflector.go:311] Stopping reflector *v1.EgressService (0s) from 
github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0131 14:43:00.256429 6857 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0131 14:43:00.256447 6857 handler.go:208] Removed *v1.EgressFirewall event handler 9\\\\nI0131 14:43:00.256455 6857 handler.go:208] Removed *v1.Node event handler 2\\\\nI0131 14:43:00.256450 6857 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0131 14:43:00.256470 6857 handler.go:208] Removed *v1.NetworkPolicy event handler 4\\\\nI0131 14:43:00.256481 6857 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0131 14:43:00.256463 6857 handler.go:208] Removed *v1.Node event handler 7\\\\nI0131 14:43:00.256712 6857 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:59Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-n8cdt_openshift-ovn-kubernetes(ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kube
rnetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://e7f5e784895f315aa4d8fb6039b9e70c46f977a80fdd5d39ead81d75048abd6c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://122856fd111512aaab
2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-zhmb7\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-n8cdt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.496900 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:58Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39d4eee728dbd9b6b0ade5149fd481cb3a688a6856f56a3a25a6cb49c944ee27\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.513584 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.513645 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.513660 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.513686 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.513703 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:16Z","lastTransitionTime":"2026-01-31T14:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.519110 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.538999 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.556952 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4c170e8-22c9-43a9-8b34-9d626c2ccddc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://d86496066c7ba97ec9852c160ecb2e25f49cb0b7ae16e667f6279be42fd52679\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3956d143be77f4a50143f9678eb51ab7871e250c
ae73d87c9e7fce2575e466c2\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fv47c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:01Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-2wpj7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.572704 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"af117b1f-6308-4303-bff0-ebc3a310c356\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3e9f62b49c0d916da6e1631f3216d52fd37ab407e878dc0509ccb19d0e5fb1df\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a0845dfce4ee156b5b52e07b6257d62908413eba9570b3767b9f00724e81e034\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962
a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a0845dfce4ee156b5b52e07b6257d62908413eba9570b3767b9f00724e81e034\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.608644 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"23027958-cbc9-4206-8dd5-13f10df7f298\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:40Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:51Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://7f4a4eb52c2c850f91c212fdc556452ab8cc91168ddb67c2078b806d8725be2a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://e66ea760a35f4e073d5ead7b0270164010b4dd14737b23202f83a10290f75d3c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-a
rt-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://aa739a6a66bd2196c9131cf929bdb8a133e3e40c3dfa9a105bb3ea33fa2ede20\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://92d196e489f72bd3c04ada6d0ea993f0ad89eb42497efc8723720ca3a7720509\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\
\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:41Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c0b0fe57d51f2684ba60b1818c1e3010e5364c6d196433972b46cb3c3f9b5e61\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:40Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ccd9efb7096722c8a48318444b235a1970fbec711faf7448d47696ff84da5d37\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ccd9efb7096722c8a48318444b235a1970fbec711faf7448d47696ff84da5d37\\\",\\\"exitCode\\\":0,\\\"finishedAt\
\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d1020dca4733e38925646f97eb80524c4060630e33323e9a5a0fdc4634c6b468\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d1020dca4733e38925646f97eb80524c4060630e33323e9a5a0fdc4634c6b468\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://bf0f78147bc50d98a5ba239c2456467778fb4724433d914b9ee4300ce3af6e4a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://bf0f78147bc50d98a5ba239c2456467778fb4724433d914b9ee4300ce3af6e4a\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":
\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.619182 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.619416 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.619622 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.619808 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.619936 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:16Z","lastTransitionTime":"2026-01-31T14:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.632694 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6afb84a0-c564-45aa-b7a1-cd6f8273fe45\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:13Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-d
ir\\\"}]},{\\\"containerID\\\":\\\"cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945
c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-31T14:41:55Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0131 14:41:50.030836 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0131 14:41:50.033684 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-814856940/tls.crt::/tmp/serving-cert-814856940/tls.key\\\\\\\"\\\\nI0131 14:41:55.828999 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0131 14:41:55.832995 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0131 14:41:55.833028 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0131 14:41:55.833056 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0131 14:41:55.833065 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0131 14:41:55.854492 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0131 14:41:55.854645 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854713 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0131 14:41:55.854778 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0131 14:41:55.854833 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nI0131 14:41:55.854881 1 genericapiserver.go:533] 
MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0131 14:41:55.854891 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0131 14:41:55.855031 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0131 14:41:55.858196 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d3
4720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.652117 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"64468352-f9fe-48bb-b204-b9f828c06bf8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:38Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:28Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://c56c1e31014f9e3d0be8140f58cff1c752ad4be1c6c60a942bc18320bbd37b6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://f41a7c5739a571e6f3ec88c3798ad2604382b9320c44ddda3d41681a64c6ab2e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha
256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://a3a6478c4477b785bcb405d597f1c835faaf4ef7adb3a2bcd6e70cc2e692f44d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:39Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://3cbc9dfffff59a784e871d47bf2ab1b8419b11840eb329ce21c0f7c24e6e77c9\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"conta
inerID\\\":\\\"cri-o://3cbc9dfffff59a784e871d47bf2ab1b8419b11840eb329ce21c0f7c24e6e77c9\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:41:37Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.669671 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"cd8c0730-67df-445e-a6ce-c2edce5d9c59\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:14Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://f27098d3c41dbb10f76ba04a8a989e91ff3eb6fe0fb0ca746e33417839235c5c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://34680c760b5c6a6e2a521731e962301b54aa3
184d5d66792fb43e991c6502a53\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:15Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-45q8c\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:14Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-4v79q\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.688901 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b7a3883-0c5a-4116-adb0-ad25f69cd7f8\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:39Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:36Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://daae5a3d70247f744d33c832eaf3080a91e73ffe0b5327d111ec8e87175becdb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-
art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b9317b698da5ee8a5f40c046d810783e523f27b6806319023e57e08f5cae267c\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://6ff7cd215439083e1e83ef38a79946be1c7e9bbc20d66cdb931c456a21bbdf17\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"started
At\\\":\\\"2026-01-31T14:41:38Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:41:36Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.714478 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:01Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fc4ac60a9db8d22605139ad928fb8fd667aaba0b8563be03e796ad077fb525bf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has 
expired or is not yet valid: current time 2026-01-31T14:43:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.723465 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.723554 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.723574 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.723630 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.723649 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:16Z","lastTransitionTime":"2026-01-31T14:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.735760 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-rtthp" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e7dd989b-33df-4562-a60b-f273428fea3d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:50Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2bdcbdac0cc4b17e027947c041a0ee4a7d7f549aa6dbe5c07c370ca7c0c50475\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-31T14:42:49Z\\\",\\\"message\\\":\\\"2026-01-31T14:42:03+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_eaa2f7bd-5331-496e-8b4a-3784e7751508\\\\n2026-01-31T14:42:03+00:00 
[cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_eaa2f7bd-5331-496e-8b4a-3784e7751508 to /host/opt/cni/bin/\\\\n2026-01-31T14:42:04Z [verbose] multus-daemon started\\\\n2026-01-31T14:42:04Z [verbose] Readiness Indicator file check\\\\n2026-01-31T14:42:49Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:50Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\
\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hwrtf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-rtthp\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.760733 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-rp5sb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"c5353863-ec39-4357-9b86-9be42ca17916\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://99c893782bcbcbd370030fe164c6849abe258ce88db65e3d22c269d20d0bef4e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:08Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\
\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://53c2988d52c487b3b7faab22afc5682e10470560a0ce155a557f45b416496b39\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:02Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:02Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c8c316d1ca94c88d30ed85232f3d03c8790ddbb3776a0d7215a504e0344903db\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:03Z\\\"}},\\\
"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d7d3c862143add96dd4656517f8402208608d33197fdb402a1cc3fde7f6c441e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:04Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://3c73f
b39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://3c73fb39c6c9db93acb0fdc34044b1cafc05046a470e3897eeaa7fac8107691e\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd0a7bc51ee2198f74d0adbfa358d772dc5176a7671b6a551157ad301348181d\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Co
mpleted\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2dbe202f26fa5c09ba45db2a0bbb7fd7796b91e665df4529eddfdabd6ce083fc\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-31T14:42:07Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-31T14:42:07Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-tgxmq\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:02Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-rp5sb\": Internal error occurred: failed 
calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.781611 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-31T14:41:57Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.797047 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-lxrfr" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6b895e2a-7887-41c3-b641-9c72bb085dda\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b5d0546721c181de3a5f2f4e791dcaeba69f94467bfbf08b4470332a0b4b49fc\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-31T14:42:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s9hbh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phas
e\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:03Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-lxrfr\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.815774 4751 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-xtn6l" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"68aeb9c7-d3c3-4c34-96ab-bb947421c504\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-31T14:42:15Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-hljmn\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-31T14:42:15Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-xtn6l\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-31T14:43:16Z is after 2025-08-24T17:21:41Z" Jan 31 14:43:16 crc 
kubenswrapper[4751]: I0131 14:43:16.817914 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 00:20:45.253881276 +0000 UTC Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.826338 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.826390 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.826407 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.826432 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.826450 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:16Z","lastTransitionTime":"2026-01-31T14:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.929382 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.929428 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.929444 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.929465 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:16 crc kubenswrapper[4751]: I0131 14:43:16.929481 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:16Z","lastTransitionTime":"2026-01-31T14:43:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.032647 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.032690 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.032705 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.032728 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.032745 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:17Z","lastTransitionTime":"2026-01-31T14:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.136196 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.136247 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.136264 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.136321 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.136341 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:17Z","lastTransitionTime":"2026-01-31T14:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.239464 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.239525 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.239534 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.239550 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.239559 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:17Z","lastTransitionTime":"2026-01-31T14:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.342923 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.342981 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.343000 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.343026 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.343046 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:17Z","lastTransitionTime":"2026-01-31T14:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.445787 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.445842 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.445858 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.445878 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.445895 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:17Z","lastTransitionTime":"2026-01-31T14:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.549321 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.549391 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.549410 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.549434 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.549453 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:17Z","lastTransitionTime":"2026-01-31T14:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.653061 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.653157 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.653176 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.653200 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.653219 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:17Z","lastTransitionTime":"2026-01-31T14:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.756489 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.756549 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.756565 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.756589 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.756605 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:17Z","lastTransitionTime":"2026-01-31T14:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.818863 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 01:54:51.983733595 +0000 UTC Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.859793 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.859842 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.859858 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.859882 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.859899 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:17Z","lastTransitionTime":"2026-01-31T14:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.962583 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.962631 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.962648 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.962670 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:17 crc kubenswrapper[4751]: I0131 14:43:17.962687 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:17Z","lastTransitionTime":"2026-01-31T14:43:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.065245 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.065339 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.065362 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.065440 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.065466 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:18Z","lastTransitionTime":"2026-01-31T14:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.168656 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.169140 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.169344 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.169532 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.169686 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:18Z","lastTransitionTime":"2026-01-31T14:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.272921 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.273197 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.273372 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.273524 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.273655 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:18Z","lastTransitionTime":"2026-01-31T14:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.376627 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.376723 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.376738 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.376792 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.376809 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:18Z","lastTransitionTime":"2026-01-31T14:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.405145 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.405243 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.405272 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:43:18 crc kubenswrapper[4751]: E0131 14:43:18.405436 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.405472 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:43:18 crc kubenswrapper[4751]: E0131 14:43:18.405572 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:43:18 crc kubenswrapper[4751]: E0131 14:43:18.405716 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:43:18 crc kubenswrapper[4751]: E0131 14:43:18.405954 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.478612 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.478648 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.478659 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.478675 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.478687 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:18Z","lastTransitionTime":"2026-01-31T14:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.581817 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.581939 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.581963 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.581997 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.582022 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:18Z","lastTransitionTime":"2026-01-31T14:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.685181 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.685265 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.685292 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.685324 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.685345 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:18Z","lastTransitionTime":"2026-01-31T14:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.787897 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.787967 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.787992 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.788024 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.788046 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:18Z","lastTransitionTime":"2026-01-31T14:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.819449 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 00:46:12.457118546 +0000 UTC Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.891138 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.891216 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.891242 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.891271 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.891292 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:18Z","lastTransitionTime":"2026-01-31T14:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.993339 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.993413 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.993438 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.993468 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:18 crc kubenswrapper[4751]: I0131 14:43:18.993488 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:18Z","lastTransitionTime":"2026-01-31T14:43:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.095668 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.095722 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.095738 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.095761 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.095777 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:19Z","lastTransitionTime":"2026-01-31T14:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.198938 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.198997 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.199021 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.199051 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.199106 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:19Z","lastTransitionTime":"2026-01-31T14:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.302139 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.302194 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.302210 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.302234 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.302253 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:19Z","lastTransitionTime":"2026-01-31T14:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.405701 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.405786 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.405809 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.405834 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.405856 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:19Z","lastTransitionTime":"2026-01-31T14:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.508359 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.508420 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.508436 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.508460 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.508476 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:19Z","lastTransitionTime":"2026-01-31T14:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.611719 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.611773 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.611790 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.611813 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.611831 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:19Z","lastTransitionTime":"2026-01-31T14:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.714539 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.714608 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.714631 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.714660 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.714684 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:19Z","lastTransitionTime":"2026-01-31T14:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.816715 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.816777 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.816796 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.816820 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.816838 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:19Z","lastTransitionTime":"2026-01-31T14:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.819853 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 21:24:17.563291148 +0000 UTC Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.918864 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.918914 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.918932 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.918956 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.918975 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:19Z","lastTransitionTime":"2026-01-31T14:43:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:19 crc kubenswrapper[4751]: I0131 14:43:19.953315 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68aeb9c7-d3c3-4c34-96ab-bb947421c504-metrics-certs\") pod \"network-metrics-daemon-xtn6l\" (UID: \"68aeb9c7-d3c3-4c34-96ab-bb947421c504\") " pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:43:19 crc kubenswrapper[4751]: E0131 14:43:19.953484 4751 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:43:19 crc kubenswrapper[4751]: E0131 14:43:19.953554 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/68aeb9c7-d3c3-4c34-96ab-bb947421c504-metrics-certs podName:68aeb9c7-d3c3-4c34-96ab-bb947421c504 nodeName:}" failed. No retries permitted until 2026-01-31 14:44:23.95353235 +0000 UTC m=+168.328245275 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/68aeb9c7-d3c3-4c34-96ab-bb947421c504-metrics-certs") pod "network-metrics-daemon-xtn6l" (UID: "68aeb9c7-d3c3-4c34-96ab-bb947421c504") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.021674 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.021719 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.021736 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.021758 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.021773 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:20Z","lastTransitionTime":"2026-01-31T14:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.125228 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.125286 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.125340 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.125367 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.125386 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:20Z","lastTransitionTime":"2026-01-31T14:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.228368 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.228448 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.228465 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.228899 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.228990 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:20Z","lastTransitionTime":"2026-01-31T14:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.332798 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.332863 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.332884 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.332915 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.332938 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:20Z","lastTransitionTime":"2026-01-31T14:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.405686 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.405822 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.406150 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.406193 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:43:20 crc kubenswrapper[4751]: E0131 14:43:20.406312 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:43:20 crc kubenswrapper[4751]: E0131 14:43:20.406512 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:43:20 crc kubenswrapper[4751]: E0131 14:43:20.406656 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:43:20 crc kubenswrapper[4751]: E0131 14:43:20.406819 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.436327 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.436383 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.436399 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.436425 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.436446 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:20Z","lastTransitionTime":"2026-01-31T14:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.539177 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.539233 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.539249 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.539272 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.539289 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:20Z","lastTransitionTime":"2026-01-31T14:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.642634 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.642684 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.642700 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.642722 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.642738 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:20Z","lastTransitionTime":"2026-01-31T14:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.745570 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.745648 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.745670 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.745698 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.745725 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:20Z","lastTransitionTime":"2026-01-31T14:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.820934 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 08:20:27.620610725 +0000 UTC Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.848058 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.848135 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.848152 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.848176 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.848193 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:20Z","lastTransitionTime":"2026-01-31T14:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.951438 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.951538 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.951553 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.951580 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:20 crc kubenswrapper[4751]: I0131 14:43:20.951611 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:20Z","lastTransitionTime":"2026-01-31T14:43:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.056141 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.056219 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.056243 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.056277 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.056300 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:21Z","lastTransitionTime":"2026-01-31T14:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.160100 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.160176 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.160195 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.160224 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.160243 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:21Z","lastTransitionTime":"2026-01-31T14:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.263799 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.263904 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.263920 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.263946 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.263968 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:21Z","lastTransitionTime":"2026-01-31T14:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.367480 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.367559 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.367584 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.367622 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.367651 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:21Z","lastTransitionTime":"2026-01-31T14:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.471498 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.471570 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.471620 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.471643 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.471659 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:21Z","lastTransitionTime":"2026-01-31T14:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.578234 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.578397 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.578425 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.578460 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.578497 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:21Z","lastTransitionTime":"2026-01-31T14:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.682866 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.683001 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.683030 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.683124 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.683152 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:21Z","lastTransitionTime":"2026-01-31T14:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.786874 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.786946 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.786965 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.786993 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.787014 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:21Z","lastTransitionTime":"2026-01-31T14:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.821775 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 22:33:08.975076182 +0000 UTC Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.889806 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.889878 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.889901 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.889939 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.889966 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:21Z","lastTransitionTime":"2026-01-31T14:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.992705 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.992785 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.992808 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.992836 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:21 crc kubenswrapper[4751]: I0131 14:43:21.992859 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:21Z","lastTransitionTime":"2026-01-31T14:43:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.096731 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.096777 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.096798 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.096822 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.096840 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:22Z","lastTransitionTime":"2026-01-31T14:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.201031 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.201128 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.201145 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.201173 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.201193 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:22Z","lastTransitionTime":"2026-01-31T14:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.304731 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.304812 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.304834 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.304866 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.304895 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:22Z","lastTransitionTime":"2026-01-31T14:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.405064 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.405168 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:43:22 crc kubenswrapper[4751]: E0131 14:43:22.405342 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.405439 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.405555 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:43:22 crc kubenswrapper[4751]: E0131 14:43:22.405686 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:43:22 crc kubenswrapper[4751]: E0131 14:43:22.405796 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:43:22 crc kubenswrapper[4751]: E0131 14:43:22.405892 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.408054 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.408126 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.408144 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.408166 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.408185 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:22Z","lastTransitionTime":"2026-01-31T14:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.511978 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.512032 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.512044 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.512093 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.512109 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:22Z","lastTransitionTime":"2026-01-31T14:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.615169 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.615222 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.615246 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.615276 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.615295 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:22Z","lastTransitionTime":"2026-01-31T14:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.718262 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.718329 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.718345 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.718368 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.718385 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:22Z","lastTransitionTime":"2026-01-31T14:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.821477 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.821535 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.821553 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.821582 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.821599 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:22Z","lastTransitionTime":"2026-01-31T14:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.822577 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 15:39:16.410631765 +0000 UTC Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.925035 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.925110 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.925128 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.925152 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:22 crc kubenswrapper[4751]: I0131 14:43:22.925171 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:22Z","lastTransitionTime":"2026-01-31T14:43:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.027985 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.028048 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.028066 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.028120 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.028137 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:23Z","lastTransitionTime":"2026-01-31T14:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.131688 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.131776 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.131796 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.131828 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.131850 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:23Z","lastTransitionTime":"2026-01-31T14:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.235667 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.235731 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.235749 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.235774 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.235793 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:23Z","lastTransitionTime":"2026-01-31T14:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.338451 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.338506 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.338516 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.338536 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.338547 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:23Z","lastTransitionTime":"2026-01-31T14:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.442177 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.442240 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.442257 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.442281 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.442298 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:23Z","lastTransitionTime":"2026-01-31T14:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.546119 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.546198 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.546217 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.546246 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.546269 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:23Z","lastTransitionTime":"2026-01-31T14:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.649477 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.649572 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.649598 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.649638 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.649665 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:23Z","lastTransitionTime":"2026-01-31T14:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.753216 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.753289 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.753310 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.753337 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.753356 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:23Z","lastTransitionTime":"2026-01-31T14:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.822781 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 19:01:42.810278795 +0000 UTC Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.857477 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.857543 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.857560 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.858529 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.858567 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:23Z","lastTransitionTime":"2026-01-31T14:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.962578 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.962998 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.963249 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.963446 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:23 crc kubenswrapper[4751]: I0131 14:43:23.963586 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:23Z","lastTransitionTime":"2026-01-31T14:43:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.067192 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.067589 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.067790 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.068033 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.068258 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:24Z","lastTransitionTime":"2026-01-31T14:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.171281 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.172024 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.172152 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.172202 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.172220 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:24Z","lastTransitionTime":"2026-01-31T14:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.275626 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.275696 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.275708 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.275732 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.275747 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:24Z","lastTransitionTime":"2026-01-31T14:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.379537 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.379597 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.379610 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.379630 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.379646 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:24Z","lastTransitionTime":"2026-01-31T14:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.405063 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.405118 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.405200 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.405123 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:43:24 crc kubenswrapper[4751]: E0131 14:43:24.405340 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:43:24 crc kubenswrapper[4751]: E0131 14:43:24.405463 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:43:24 crc kubenswrapper[4751]: E0131 14:43:24.405599 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:43:24 crc kubenswrapper[4751]: E0131 14:43:24.405924 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.483498 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.483570 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.483588 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.483617 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.483638 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:24Z","lastTransitionTime":"2026-01-31T14:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.587393 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.587459 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.587476 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.587502 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.587521 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:24Z","lastTransitionTime":"2026-01-31T14:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.690771 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.690844 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.690866 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.690896 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.690919 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:24Z","lastTransitionTime":"2026-01-31T14:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.793900 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.793954 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.793966 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.793984 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.793997 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:24Z","lastTransitionTime":"2026-01-31T14:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.822978 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-20 12:51:38.261430905 +0000 UTC Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.898042 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.898148 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.898167 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.898199 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.898223 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:24Z","lastTransitionTime":"2026-01-31T14:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.971887 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.971982 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.972010 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.972045 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 31 14:43:24 crc kubenswrapper[4751]: I0131 14:43:24.972111 4751 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-31T14:43:24Z","lastTransitionTime":"2026-01-31T14:43:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.044856 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-8ph7v"] Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.045494 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8ph7v" Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.049132 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.049412 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.049569 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.051034 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.109770 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-lxrfr" podStartSLOduration=85.109739001 podStartE2EDuration="1m25.109739001s" podCreationTimestamp="2026-01-31 14:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:43:25.092456282 +0000 UTC m=+109.467169197" watchObservedRunningTime="2026-01-31 14:43:25.109739001 +0000 UTC m=+109.484451926" Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.171625 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-68hvr" podStartSLOduration=85.171585481 podStartE2EDuration="1m25.171585481s" podCreationTimestamp="2026-01-31 14:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:43:25.170432141 +0000 UTC m=+109.545145056" watchObservedRunningTime="2026-01-31 14:43:25.171585481 +0000 UTC m=+109.546298406" Jan 31 
14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.227727 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c799c46b-62ca-4376-bcdf-6b77761ad60a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8ph7v\" (UID: \"c799c46b-62ca-4376-bcdf-6b77761ad60a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8ph7v" Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.227841 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c799c46b-62ca-4376-bcdf-6b77761ad60a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8ph7v\" (UID: \"c799c46b-62ca-4376-bcdf-6b77761ad60a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8ph7v" Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.228038 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c799c46b-62ca-4376-bcdf-6b77761ad60a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8ph7v\" (UID: \"c799c46b-62ca-4376-bcdf-6b77761ad60a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8ph7v" Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.228148 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c799c46b-62ca-4376-bcdf-6b77761ad60a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8ph7v\" (UID: \"c799c46b-62ca-4376-bcdf-6b77761ad60a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8ph7v" Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.228254 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/c799c46b-62ca-4376-bcdf-6b77761ad60a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8ph7v\" (UID: \"c799c46b-62ca-4376-bcdf-6b77761ad60a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8ph7v" Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.245328 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=57.245296108 podStartE2EDuration="57.245296108s" podCreationTimestamp="2026-01-31 14:42:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:43:25.245008061 +0000 UTC m=+109.619720976" watchObservedRunningTime="2026-01-31 14:43:25.245296108 +0000 UTC m=+109.620009023" Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.329669 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/c799c46b-62ca-4376-bcdf-6b77761ad60a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8ph7v\" (UID: \"c799c46b-62ca-4376-bcdf-6b77761ad60a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8ph7v" Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.329731 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c799c46b-62ca-4376-bcdf-6b77761ad60a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8ph7v\" (UID: \"c799c46b-62ca-4376-bcdf-6b77761ad60a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8ph7v" Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.329787 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c799c46b-62ca-4376-bcdf-6b77761ad60a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8ph7v\" 
(UID: \"c799c46b-62ca-4376-bcdf-6b77761ad60a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8ph7v" Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.329872 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c799c46b-62ca-4376-bcdf-6b77761ad60a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8ph7v\" (UID: \"c799c46b-62ca-4376-bcdf-6b77761ad60a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8ph7v" Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.329928 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c799c46b-62ca-4376-bcdf-6b77761ad60a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8ph7v\" (UID: \"c799c46b-62ca-4376-bcdf-6b77761ad60a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8ph7v" Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.330031 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/c799c46b-62ca-4376-bcdf-6b77761ad60a-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-8ph7v\" (UID: \"c799c46b-62ca-4376-bcdf-6b77761ad60a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8ph7v" Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.330572 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/c799c46b-62ca-4376-bcdf-6b77761ad60a-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-8ph7v\" (UID: \"c799c46b-62ca-4376-bcdf-6b77761ad60a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8ph7v" Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.331467 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"service-ca\" (UniqueName: \"kubernetes.io/configmap/c799c46b-62ca-4376-bcdf-6b77761ad60a-service-ca\") pod \"cluster-version-operator-5c965bbfc6-8ph7v\" (UID: \"c799c46b-62ca-4376-bcdf-6b77761ad60a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8ph7v" Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.331744 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podStartSLOduration=85.331716767 podStartE2EDuration="1m25.331716767s" podCreationTimestamp="2026-01-31 14:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:43:25.331704577 +0000 UTC m=+109.706417502" watchObservedRunningTime="2026-01-31 14:43:25.331716767 +0000 UTC m=+109.706429692" Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.339807 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c799c46b-62ca-4376-bcdf-6b77761ad60a-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-8ph7v\" (UID: \"c799c46b-62ca-4376-bcdf-6b77761ad60a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8ph7v" Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.367045 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c799c46b-62ca-4376-bcdf-6b77761ad60a-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-8ph7v\" (UID: \"c799c46b-62ca-4376-bcdf-6b77761ad60a\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8ph7v" Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.374307 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8ph7v" Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.403500 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=25.403479234 podStartE2EDuration="25.403479234s" podCreationTimestamp="2026-01-31 14:43:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:43:25.357392085 +0000 UTC m=+109.732105000" watchObservedRunningTime="2026-01-31 14:43:25.403479234 +0000 UTC m=+109.778192129" Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.407313 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=23.407294434 podStartE2EDuration="23.407294434s" podCreationTimestamp="2026-01-31 14:43:02 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:43:25.402780176 +0000 UTC m=+109.777493071" watchObservedRunningTime="2026-01-31 14:43:25.407294434 +0000 UTC m=+109.782007319" Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.429642 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=88.429622255 podStartE2EDuration="1m28.429622255s" podCreationTimestamp="2026-01-31 14:41:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:43:25.425840426 +0000 UTC m=+109.800553321" watchObservedRunningTime="2026-01-31 14:43:25.429622255 +0000 UTC m=+109.804335150" Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.468189 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-multus/multus-additional-cni-plugins-rp5sb" podStartSLOduration=84.468162277 podStartE2EDuration="1m24.468162277s" podCreationTimestamp="2026-01-31 14:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:43:25.453482915 +0000 UTC m=+109.828195810" watchObservedRunningTime="2026-01-31 14:43:25.468162277 +0000 UTC m=+109.842875192" Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.487172 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=88.487149741 podStartE2EDuration="1m28.487149741s" podCreationTimestamp="2026-01-31 14:41:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:43:25.486873224 +0000 UTC m=+109.861586129" watchObservedRunningTime="2026-01-31 14:43:25.487149741 +0000 UTC m=+109.861862646" Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.487486 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-4v79q" podStartSLOduration=84.48747655 podStartE2EDuration="1m24.48747655s" podCreationTimestamp="2026-01-31 14:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:43:25.468737672 +0000 UTC m=+109.843450577" watchObservedRunningTime="2026-01-31 14:43:25.48747655 +0000 UTC m=+109.862189445" Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.514125 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-rtthp" podStartSLOduration=84.514060082 podStartE2EDuration="1m24.514060082s" podCreationTimestamp="2026-01-31 14:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:43:25.513890807 +0000 UTC m=+109.888603732" watchObservedRunningTime="2026-01-31 14:43:25.514060082 +0000 UTC m=+109.888772967" Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.824211 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 06:16:53.999549957 +0000 UTC Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.824317 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.835635 4751 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.870202 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8ph7v" event={"ID":"c799c46b-62ca-4376-bcdf-6b77761ad60a","Type":"ContainerStarted","Data":"28aa82b3a8224fba26d4ab32d68fa1950fe98de42ee5b731dcf5aae8f47bfc4b"} Jan 31 14:43:25 crc kubenswrapper[4751]: I0131 14:43:25.870286 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8ph7v" event={"ID":"c799c46b-62ca-4376-bcdf-6b77761ad60a","Type":"ContainerStarted","Data":"5446e418d148c28288fe46409508bc2b5114d5b8e44072195c980de3cedbc769"} Jan 31 14:43:26 crc kubenswrapper[4751]: I0131 14:43:26.405385 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:43:26 crc kubenswrapper[4751]: I0131 14:43:26.405483 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:43:26 crc kubenswrapper[4751]: E0131 14:43:26.407224 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:43:26 crc kubenswrapper[4751]: I0131 14:43:26.407272 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:43:26 crc kubenswrapper[4751]: E0131 14:43:26.407477 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:43:26 crc kubenswrapper[4751]: I0131 14:43:26.407539 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:43:26 crc kubenswrapper[4751]: E0131 14:43:26.407724 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:43:26 crc kubenswrapper[4751]: E0131 14:43:26.407563 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:43:28 crc kubenswrapper[4751]: I0131 14:43:28.404934 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:43:28 crc kubenswrapper[4751]: I0131 14:43:28.405050 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:43:28 crc kubenswrapper[4751]: E0131 14:43:28.405174 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:43:28 crc kubenswrapper[4751]: I0131 14:43:28.405050 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:43:28 crc kubenswrapper[4751]: E0131 14:43:28.405274 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:43:28 crc kubenswrapper[4751]: E0131 14:43:28.405336 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:43:28 crc kubenswrapper[4751]: I0131 14:43:28.405351 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:43:28 crc kubenswrapper[4751]: E0131 14:43:28.405445 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:43:29 crc kubenswrapper[4751]: I0131 14:43:29.406807 4751 scope.go:117] "RemoveContainer" containerID="373c89defd0c3e17f3124be6af9afba6b241a48af85f558bb51d281d16ba27ac" Jan 31 14:43:29 crc kubenswrapper[4751]: E0131 14:43:29.407168 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-n8cdt_openshift-ovn-kubernetes(ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b)\"" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" Jan 31 14:43:30 crc kubenswrapper[4751]: I0131 14:43:30.405900 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:43:30 crc kubenswrapper[4751]: I0131 14:43:30.405972 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:43:30 crc kubenswrapper[4751]: I0131 14:43:30.406017 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:43:30 crc kubenswrapper[4751]: I0131 14:43:30.405898 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:43:30 crc kubenswrapper[4751]: E0131 14:43:30.406200 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:43:30 crc kubenswrapper[4751]: E0131 14:43:30.406332 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:43:30 crc kubenswrapper[4751]: E0131 14:43:30.406445 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:43:30 crc kubenswrapper[4751]: E0131 14:43:30.406589 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:43:32 crc kubenswrapper[4751]: I0131 14:43:32.404952 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:43:32 crc kubenswrapper[4751]: I0131 14:43:32.404964 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:43:32 crc kubenswrapper[4751]: I0131 14:43:32.405091 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:43:32 crc kubenswrapper[4751]: I0131 14:43:32.405278 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:43:32 crc kubenswrapper[4751]: E0131 14:43:32.405290 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:43:32 crc kubenswrapper[4751]: E0131 14:43:32.405389 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:43:32 crc kubenswrapper[4751]: E0131 14:43:32.405518 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:43:32 crc kubenswrapper[4751]: E0131 14:43:32.405857 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:43:34 crc kubenswrapper[4751]: I0131 14:43:34.405012 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:43:34 crc kubenswrapper[4751]: I0131 14:43:34.405139 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:43:34 crc kubenswrapper[4751]: I0131 14:43:34.405139 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:43:34 crc kubenswrapper[4751]: E0131 14:43:34.406331 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:43:34 crc kubenswrapper[4751]: I0131 14:43:34.405315 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:43:34 crc kubenswrapper[4751]: E0131 14:43:34.406759 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:43:34 crc kubenswrapper[4751]: E0131 14:43:34.406335 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:43:34 crc kubenswrapper[4751]: E0131 14:43:34.407159 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:43:35 crc kubenswrapper[4751]: I0131 14:43:35.908330 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rtthp_e7dd989b-33df-4562-a60b-f273428fea3d/kube-multus/1.log" Jan 31 14:43:35 crc kubenswrapper[4751]: I0131 14:43:35.909430 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rtthp_e7dd989b-33df-4562-a60b-f273428fea3d/kube-multus/0.log" Jan 31 14:43:35 crc kubenswrapper[4751]: I0131 14:43:35.909507 4751 generic.go:334] "Generic (PLEG): container finished" podID="e7dd989b-33df-4562-a60b-f273428fea3d" containerID="2bdcbdac0cc4b17e027947c041a0ee4a7d7f549aa6dbe5c07c370ca7c0c50475" exitCode=1 Jan 31 14:43:35 crc kubenswrapper[4751]: I0131 14:43:35.909563 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rtthp" event={"ID":"e7dd989b-33df-4562-a60b-f273428fea3d","Type":"ContainerDied","Data":"2bdcbdac0cc4b17e027947c041a0ee4a7d7f549aa6dbe5c07c370ca7c0c50475"} Jan 31 14:43:35 crc kubenswrapper[4751]: I0131 14:43:35.909623 4751 scope.go:117] "RemoveContainer" containerID="7ef2aa95d0f98ef0706040984786e2b48337984b508447f1b49b155ffb5f4608" Jan 31 14:43:35 crc kubenswrapper[4751]: I0131 14:43:35.911465 4751 scope.go:117] "RemoveContainer" containerID="2bdcbdac0cc4b17e027947c041a0ee4a7d7f549aa6dbe5c07c370ca7c0c50475" Jan 31 14:43:35 crc kubenswrapper[4751]: E0131 14:43:35.911906 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-rtthp_openshift-multus(e7dd989b-33df-4562-a60b-f273428fea3d)\"" pod="openshift-multus/multus-rtthp" podUID="e7dd989b-33df-4562-a60b-f273428fea3d" Jan 31 14:43:35 crc kubenswrapper[4751]: I0131 14:43:35.939158 4751 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-8ph7v" podStartSLOduration=95.939130477 podStartE2EDuration="1m35.939130477s" podCreationTimestamp="2026-01-31 14:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:43:25.893100314 +0000 UTC m=+110.267813209" watchObservedRunningTime="2026-01-31 14:43:35.939130477 +0000 UTC m=+120.313843402" Jan 31 14:43:36 crc kubenswrapper[4751]: E0131 14:43:36.380937 4751 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 31 14:43:36 crc kubenswrapper[4751]: I0131 14:43:36.404910 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:43:36 crc kubenswrapper[4751]: I0131 14:43:36.404966 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:43:36 crc kubenswrapper[4751]: I0131 14:43:36.405011 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:43:36 crc kubenswrapper[4751]: I0131 14:43:36.405060 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:43:36 crc kubenswrapper[4751]: E0131 14:43:36.407146 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:43:36 crc kubenswrapper[4751]: E0131 14:43:36.407253 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:43:36 crc kubenswrapper[4751]: E0131 14:43:36.407438 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:43:36 crc kubenswrapper[4751]: E0131 14:43:36.407587 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:43:36 crc kubenswrapper[4751]: E0131 14:43:36.503255 4751 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
Jan 31 14:43:36 crc kubenswrapper[4751]: I0131 14:43:36.915128 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rtthp_e7dd989b-33df-4562-a60b-f273428fea3d/kube-multus/1.log" Jan 31 14:43:38 crc kubenswrapper[4751]: I0131 14:43:38.405362 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:43:38 crc kubenswrapper[4751]: E0131 14:43:38.405508 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:43:38 crc kubenswrapper[4751]: I0131 14:43:38.405541 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:43:38 crc kubenswrapper[4751]: I0131 14:43:38.405580 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:43:38 crc kubenswrapper[4751]: E0131 14:43:38.405687 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:43:38 crc kubenswrapper[4751]: E0131 14:43:38.405957 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:43:38 crc kubenswrapper[4751]: I0131 14:43:38.405382 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:43:38 crc kubenswrapper[4751]: E0131 14:43:38.406351 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:43:40 crc kubenswrapper[4751]: I0131 14:43:40.405665 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:43:40 crc kubenswrapper[4751]: I0131 14:43:40.405745 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:43:40 crc kubenswrapper[4751]: E0131 14:43:40.406374 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:43:40 crc kubenswrapper[4751]: I0131 14:43:40.405859 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:43:40 crc kubenswrapper[4751]: E0131 14:43:40.406685 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:43:40 crc kubenswrapper[4751]: E0131 14:43:40.406567 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:43:40 crc kubenswrapper[4751]: I0131 14:43:40.405812 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:43:40 crc kubenswrapper[4751]: E0131 14:43:40.406988 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:43:41 crc kubenswrapper[4751]: E0131 14:43:41.504446 4751 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 14:43:42 crc kubenswrapper[4751]: I0131 14:43:42.405615 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:43:42 crc kubenswrapper[4751]: I0131 14:43:42.405681 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:43:42 crc kubenswrapper[4751]: I0131 14:43:42.405726 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:43:42 crc kubenswrapper[4751]: I0131 14:43:42.405772 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:43:42 crc kubenswrapper[4751]: E0131 14:43:42.406819 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:43:42 crc kubenswrapper[4751]: E0131 14:43:42.406899 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:43:42 crc kubenswrapper[4751]: E0131 14:43:42.406972 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:43:42 crc kubenswrapper[4751]: I0131 14:43:42.407116 4751 scope.go:117] "RemoveContainer" containerID="373c89defd0c3e17f3124be6af9afba6b241a48af85f558bb51d281d16ba27ac" Jan 31 14:43:42 crc kubenswrapper[4751]: E0131 14:43:42.407515 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:43:42 crc kubenswrapper[4751]: I0131 14:43:42.938148 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8cdt_ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b/ovnkube-controller/3.log" Jan 31 14:43:42 crc kubenswrapper[4751]: I0131 14:43:42.940681 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" event={"ID":"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b","Type":"ContainerStarted","Data":"153c98b7ebe36043f7ae094ec4ae3226c12652e95174c4ff2d00efc441bdb785"} Jan 31 14:43:42 crc kubenswrapper[4751]: I0131 14:43:42.941144 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:43:42 crc kubenswrapper[4751]: I0131 14:43:42.978656 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" podStartSLOduration=101.9786403 podStartE2EDuration="1m41.9786403s" podCreationTimestamp="2026-01-31 14:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:43:42.976955907 +0000 UTC m=+127.351668802" watchObservedRunningTime="2026-01-31 14:43:42.9786403 +0000 UTC m=+127.353353205" Jan 31 14:43:43 crc kubenswrapper[4751]: I0131 14:43:43.391776 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xtn6l"] Jan 31 14:43:43 crc kubenswrapper[4751]: I0131 14:43:43.391878 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:43:43 crc kubenswrapper[4751]: E0131 14:43:43.391974 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:43:44 crc kubenswrapper[4751]: I0131 14:43:44.404856 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:43:44 crc kubenswrapper[4751]: I0131 14:43:44.404952 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:43:44 crc kubenswrapper[4751]: E0131 14:43:44.405409 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:43:44 crc kubenswrapper[4751]: I0131 14:43:44.405051 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:43:44 crc kubenswrapper[4751]: E0131 14:43:44.405517 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:43:44 crc kubenswrapper[4751]: E0131 14:43:44.405679 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:43:45 crc kubenswrapper[4751]: I0131 14:43:45.405814 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:43:45 crc kubenswrapper[4751]: E0131 14:43:45.405954 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:43:46 crc kubenswrapper[4751]: I0131 14:43:46.405305 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:43:46 crc kubenswrapper[4751]: I0131 14:43:46.405327 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:43:46 crc kubenswrapper[4751]: I0131 14:43:46.406462 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:43:46 crc kubenswrapper[4751]: E0131 14:43:46.406454 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:43:46 crc kubenswrapper[4751]: E0131 14:43:46.406638 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:43:46 crc kubenswrapper[4751]: E0131 14:43:46.406727 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:43:46 crc kubenswrapper[4751]: E0131 14:43:46.506371 4751 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 31 14:43:47 crc kubenswrapper[4751]: I0131 14:43:47.405565 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:43:47 crc kubenswrapper[4751]: E0131 14:43:47.405711 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:43:47 crc kubenswrapper[4751]: I0131 14:43:47.406286 4751 scope.go:117] "RemoveContainer" containerID="2bdcbdac0cc4b17e027947c041a0ee4a7d7f549aa6dbe5c07c370ca7c0c50475" Jan 31 14:43:47 crc kubenswrapper[4751]: I0131 14:43:47.960597 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rtthp_e7dd989b-33df-4562-a60b-f273428fea3d/kube-multus/1.log" Jan 31 14:43:47 crc kubenswrapper[4751]: I0131 14:43:47.960990 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rtthp" event={"ID":"e7dd989b-33df-4562-a60b-f273428fea3d","Type":"ContainerStarted","Data":"98a2f0e75ca2c214fba50a70792a41195e5b7e674dbe1ae5b98cd015b7526483"} Jan 31 14:43:48 crc kubenswrapper[4751]: I0131 14:43:48.405208 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:43:48 crc kubenswrapper[4751]: I0131 14:43:48.405288 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:43:48 crc kubenswrapper[4751]: I0131 14:43:48.405328 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:43:48 crc kubenswrapper[4751]: E0131 14:43:48.405911 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:43:48 crc kubenswrapper[4751]: E0131 14:43:48.405689 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:43:48 crc kubenswrapper[4751]: E0131 14:43:48.406050 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:43:49 crc kubenswrapper[4751]: I0131 14:43:49.405641 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:43:49 crc kubenswrapper[4751]: E0131 14:43:49.405826 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:43:50 crc kubenswrapper[4751]: I0131 14:43:50.405149 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:43:50 crc kubenswrapper[4751]: I0131 14:43:50.405147 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:43:50 crc kubenswrapper[4751]: E0131 14:43:50.405298 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 31 14:43:50 crc kubenswrapper[4751]: E0131 14:43:50.405371 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 31 14:43:50 crc kubenswrapper[4751]: I0131 14:43:50.405168 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:43:50 crc kubenswrapper[4751]: E0131 14:43:50.405568 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 31 14:43:51 crc kubenswrapper[4751]: I0131 14:43:51.405120 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:43:51 crc kubenswrapper[4751]: E0131 14:43:51.405254 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-xtn6l" podUID="68aeb9c7-d3c3-4c34-96ab-bb947421c504" Jan 31 14:43:52 crc kubenswrapper[4751]: I0131 14:43:52.405369 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:43:52 crc kubenswrapper[4751]: I0131 14:43:52.405423 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:43:52 crc kubenswrapper[4751]: I0131 14:43:52.405587 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:43:52 crc kubenswrapper[4751]: I0131 14:43:52.409149 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 31 14:43:52 crc kubenswrapper[4751]: I0131 14:43:52.409209 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 31 14:43:52 crc kubenswrapper[4751]: I0131 14:43:52.409397 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 31 14:43:52 crc kubenswrapper[4751]: I0131 14:43:52.409620 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 31 14:43:53 crc kubenswrapper[4751]: I0131 14:43:53.405058 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:43:53 crc kubenswrapper[4751]: I0131 14:43:53.407943 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 31 14:43:53 crc kubenswrapper[4751]: I0131 14:43:53.407941 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.824953 4751 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.874323 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-sxjf5"] Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.874853 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-sxjf5" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.878599 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4"] Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.879377 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.889251 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.889344 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.890029 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.890518 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.890815 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.891104 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.891464 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.891661 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.891831 4751 
kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-z9dj7"] Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.891929 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.892327 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.892367 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z9dj7" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.892632 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.892778 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.893054 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.895395 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.897487 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pmglg"] Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.897958 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pmglg" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.900147 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.900801 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.901058 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.901343 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.901567 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.906305 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4gqrl"] Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.907009 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4gqrl" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.907586 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-4m7jl"] Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.913207 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-4m7jl" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.922230 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-cc6m2"] Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.945411 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.945791 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.946003 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.946831 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.947002 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.947171 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.947421 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.947750 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.948009 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 
14:43:55.948455 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8m7f4"] Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.948623 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.948811 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-db5pg"] Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.949004 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5f7jc"] Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.949150 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8m7f4" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.949246 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-db5pg" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.948814 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.948877 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.949524 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cc6m2" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.950306 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.950606 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7762w"] Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.951635 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.952095 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.952259 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.952383 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.955631 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.955831 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.956026 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.956164 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.956244 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 31 14:43:55 crc 
kubenswrapper[4751]: I0131 14:43:55.956554 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.957242 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.957481 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.957636 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.957837 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.958000 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.958224 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.958383 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.960379 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8fgxq"] Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.960706 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7762w" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.961179 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nz99n"] Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.961666 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nz99n" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.962250 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-8fgxq" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.962736 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cp47m"] Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.963303 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cp47m" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.966132 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.966447 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.966955 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.967514 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-5hn9b"] Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.967685 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.967996 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.968184 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.968257 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.968403 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.968483 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.968619 4751 util.go:30] "No sandbox for 
pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-5hn9b" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.968935 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.969186 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-v8p8j"] Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.976720 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-v8p8j" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.968973 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.971773 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bcd7a932-6db9-4cca-b619-852242324725-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4gqrl\" (UID: \"bcd7a932-6db9-4cca-b619-852242324725\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4gqrl" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.977360 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1b479ec-e8d7-4fb6-8d0b-9fac28697df7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pmglg\" (UID: \"b1b479ec-e8d7-4fb6-8d0b-9fac28697df7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pmglg" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.977410 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e14d9fb0-f377-4331-8bc1-8f4017bb95a3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-z9dj7\" (UID: \"e14d9fb0-f377-4331-8bc1-8f4017bb95a3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z9dj7" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.977469 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e14d9fb0-f377-4331-8bc1-8f4017bb95a3-serving-cert\") pod \"openshift-config-operator-7777fb866f-z9dj7\" (UID: \"e14d9fb0-f377-4331-8bc1-8f4017bb95a3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z9dj7" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.977500 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1e92f9b-2291-4bd5-80b8-c2f9e667acf8-config\") pod \"controller-manager-879f6c89f-sxjf5\" (UID: \"c1e92f9b-2291-4bd5-80b8-c2f9e667acf8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sxjf5" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.977542 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x2lr7\" (UniqueName: \"kubernetes.io/projected/e14d9fb0-f377-4331-8bc1-8f4017bb95a3-kube-api-access-x2lr7\") pod \"openshift-config-operator-7777fb866f-z9dj7\" (UID: \"e14d9fb0-f377-4331-8bc1-8f4017bb95a3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z9dj7" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.977579 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5f75ab4e-45c1-4ed9-b966-afa91dbc88a6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-wdsj4\" (UID: \"5f75ab4e-45c1-4ed9-b966-afa91dbc88a6\") 
" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.977611 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stc4r\" (UniqueName: \"kubernetes.io/projected/bcd7a932-6db9-4cca-b619-852242324725-kube-api-access-stc4r\") pod \"machine-api-operator-5694c8668f-4gqrl\" (UID: \"bcd7a932-6db9-4cca-b619-852242324725\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4gqrl" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.977644 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bcd7a932-6db9-4cca-b619-852242324725-images\") pod \"machine-api-operator-5694c8668f-4gqrl\" (UID: \"bcd7a932-6db9-4cca-b619-852242324725\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4gqrl" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.977701 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5f75ab4e-45c1-4ed9-b966-afa91dbc88a6-audit-policies\") pod \"apiserver-7bbb656c7d-wdsj4\" (UID: \"5f75ab4e-45c1-4ed9-b966-afa91dbc88a6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.977732 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f75ab4e-45c1-4ed9-b966-afa91dbc88a6-audit-dir\") pod \"apiserver-7bbb656c7d-wdsj4\" (UID: \"5f75ab4e-45c1-4ed9-b966-afa91dbc88a6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.977767 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/c1e92f9b-2291-4bd5-80b8-c2f9e667acf8-client-ca\") pod \"controller-manager-879f6c89f-sxjf5\" (UID: \"c1e92f9b-2291-4bd5-80b8-c2f9e667acf8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sxjf5" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.977902 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dz6sg\" (UniqueName: \"kubernetes.io/projected/5f75ab4e-45c1-4ed9-b966-afa91dbc88a6-kube-api-access-dz6sg\") pod \"apiserver-7bbb656c7d-wdsj4\" (UID: \"5f75ab4e-45c1-4ed9-b966-afa91dbc88a6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.977961 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8p9z\" (UniqueName: \"kubernetes.io/projected/d723501b-bb29-4d60-ad97-239eb749771f-kube-api-access-f8p9z\") pod \"downloads-7954f5f757-4m7jl\" (UID: \"d723501b-bb29-4d60-ad97-239eb749771f\") " pod="openshift-console/downloads-7954f5f757-4m7jl" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.977994 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5f75ab4e-45c1-4ed9-b966-afa91dbc88a6-encryption-config\") pod \"apiserver-7bbb656c7d-wdsj4\" (UID: \"5f75ab4e-45c1-4ed9-b966-afa91dbc88a6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.978034 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1b479ec-e8d7-4fb6-8d0b-9fac28697df7-service-ca-bundle\") pod \"authentication-operator-69f744f599-pmglg\" (UID: \"b1b479ec-e8d7-4fb6-8d0b-9fac28697df7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pmglg" Jan 31 14:43:55 
crc kubenswrapper[4751]: I0131 14:43:55.978100 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1b479ec-e8d7-4fb6-8d0b-9fac28697df7-config\") pod \"authentication-operator-69f744f599-pmglg\" (UID: \"b1b479ec-e8d7-4fb6-8d0b-9fac28697df7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pmglg" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.978146 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6lcrc\" (UniqueName: \"kubernetes.io/projected/b1b479ec-e8d7-4fb6-8d0b-9fac28697df7-kube-api-access-6lcrc\") pod \"authentication-operator-69f744f599-pmglg\" (UID: \"b1b479ec-e8d7-4fb6-8d0b-9fac28697df7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pmglg" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.978181 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5f75ab4e-45c1-4ed9-b966-afa91dbc88a6-etcd-client\") pod \"apiserver-7bbb656c7d-wdsj4\" (UID: \"5f75ab4e-45c1-4ed9-b966-afa91dbc88a6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.978212 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fbprf\" (UniqueName: \"kubernetes.io/projected/c1e92f9b-2291-4bd5-80b8-c2f9e667acf8-kube-api-access-fbprf\") pod \"controller-manager-879f6c89f-sxjf5\" (UID: \"c1e92f9b-2291-4bd5-80b8-c2f9e667acf8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sxjf5" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.978246 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/c1e92f9b-2291-4bd5-80b8-c2f9e667acf8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-sxjf5\" (UID: \"c1e92f9b-2291-4bd5-80b8-c2f9e667acf8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sxjf5" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.978277 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f75ab4e-45c1-4ed9-b966-afa91dbc88a6-serving-cert\") pod \"apiserver-7bbb656c7d-wdsj4\" (UID: \"5f75ab4e-45c1-4ed9-b966-afa91dbc88a6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.978310 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1e92f9b-2291-4bd5-80b8-c2f9e667acf8-serving-cert\") pod \"controller-manager-879f6c89f-sxjf5\" (UID: \"c1e92f9b-2291-4bd5-80b8-c2f9e667acf8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sxjf5" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.978381 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1b479ec-e8d7-4fb6-8d0b-9fac28697df7-serving-cert\") pod \"authentication-operator-69f744f599-pmglg\" (UID: \"b1b479ec-e8d7-4fb6-8d0b-9fac28697df7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pmglg" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.969114 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.978415 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcd7a932-6db9-4cca-b619-852242324725-config\") pod 
\"machine-api-operator-5694c8668f-4gqrl\" (UID: \"bcd7a932-6db9-4cca-b619-852242324725\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4gqrl" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.969239 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.978662 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f75ab4e-45c1-4ed9-b966-afa91dbc88a6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-wdsj4\" (UID: \"5f75ab4e-45c1-4ed9-b966-afa91dbc88a6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.969384 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.969431 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.969458 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.969481 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.969514 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.969598 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.969795 4751 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.970994 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.971206 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.971235 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.971260 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.971287 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.971314 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.971355 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.971380 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.971406 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.971463 4751 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.971490 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.980004 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mpbgx"] Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.980529 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.980859 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hgs4c"] Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.983200 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.984472 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-hgs4c" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.984981 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.985013 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.985150 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.985258 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.985295 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.985303 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.985357 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.985442 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.985510 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.996031 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s7gwp"] Jan 31 14:43:55 crc kubenswrapper[4751]: I0131 14:43:55.996890 4751 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s7gwp" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.005093 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-drp8h"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.027860 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.028418 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xr2gt"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.028492 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.028831 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.029162 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-drp8h" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.029808 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.029956 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.030142 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.030293 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.030403 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.030534 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.030664 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.030837 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.031412 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-h262z"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.031804 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-h262z" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.032071 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xv2tk"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.032715 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xv2tk" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.032887 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.033676 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.035657 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.037159 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-czqdr"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.037641 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-czqdr" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.039552 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.043417 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.045531 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-w4lzx"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.046434 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w4lzx" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.048764 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-b44gm"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.049382 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xqgfv"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.049734 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v4px6"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.049832 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xqgfv" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.049949 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b44gm" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.050779 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvfvk"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.050877 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v4px6" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.051759 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvfvk" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.052321 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l76jv"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.052767 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l76jv" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.052955 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.057855 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h4drr"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.058290 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hjp9"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.058554 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h4drr" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.058564 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ghblb"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.058645 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hjp9" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.059454 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ghblb" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.059499 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5r6kv"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.060559 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.060648 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vc9q2"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.060973 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vc9q2" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.061143 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5r6kv" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.062120 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497830-rpwmp"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.070848 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7hc86"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.071041 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497830-rpwmp" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.071817 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vbfvz"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.072123 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7hc86" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.072500 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-sxjf5"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.072539 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cp47m"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.072553 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-skzbg"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.073296 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-z9dj7"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.073376 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-skzbg" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.073409 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-vbfvz" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.074560 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pmglg"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.075768 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4gqrl"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.077189 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s7gwp"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.077735 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.078756 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v4px6"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.079430 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bcd7a932-6db9-4cca-b619-852242324725-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4gqrl\" (UID: \"bcd7a932-6db9-4cca-b619-852242324725\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4gqrl" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.079469 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2d8dfb13-f0a0-465c-821d-95f0df0a98cf-etcd-ca\") pod 
\"etcd-operator-b45778765-hgs4c\" (UID: \"2d8dfb13-f0a0-465c-821d-95f0df0a98cf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hgs4c" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.079486 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2d8dfb13-f0a0-465c-821d-95f0df0a98cf-etcd-service-ca\") pod \"etcd-operator-b45778765-hgs4c\" (UID: \"2d8dfb13-f0a0-465c-821d-95f0df0a98cf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hgs4c" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.079502 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57wkh\" (UniqueName: \"kubernetes.io/projected/6a74f65d-f8d2-41af-8469-6f8d020b41de-kube-api-access-57wkh\") pod \"console-operator-58897d9998-db5pg\" (UID: \"6a74f65d-f8d2-41af-8469-6f8d020b41de\") " pod="openshift-console-operator/console-operator-58897d9998-db5pg" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.079569 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1b479ec-e8d7-4fb6-8d0b-9fac28697df7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pmglg\" (UID: \"b1b479ec-e8d7-4fb6-8d0b-9fac28697df7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pmglg" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.079585 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e14d9fb0-f377-4331-8bc1-8f4017bb95a3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-z9dj7\" (UID: \"e14d9fb0-f377-4331-8bc1-8f4017bb95a3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z9dj7" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.079603 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.079622 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.079639 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.079654 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b9tnj\" (UniqueName: \"kubernetes.io/projected/b9810521-7440-49d4-bf04-7dbe3324cc5b-kube-api-access-b9tnj\") pod \"multus-admission-controller-857f4d67dd-v8p8j\" (UID: \"b9810521-7440-49d4-bf04-7dbe3324cc5b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-v8p8j" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.079670 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/6a74f65d-f8d2-41af-8469-6f8d020b41de-trusted-ca\") pod \"console-operator-58897d9998-db5pg\" (UID: \"6a74f65d-f8d2-41af-8469-6f8d020b41de\") " pod="openshift-console-operator/console-operator-58897d9998-db5pg" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.079685 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e14d9fb0-f377-4331-8bc1-8f4017bb95a3-serving-cert\") pod \"openshift-config-operator-7777fb866f-z9dj7\" (UID: \"e14d9fb0-f377-4331-8bc1-8f4017bb95a3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z9dj7" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.079701 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1e92f9b-2291-4bd5-80b8-c2f9e667acf8-config\") pod \"controller-manager-879f6c89f-sxjf5\" (UID: \"c1e92f9b-2291-4bd5-80b8-c2f9e667acf8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sxjf5" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.079717 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a74f65d-f8d2-41af-8469-6f8d020b41de-config\") pod \"console-operator-58897d9998-db5pg\" (UID: \"6a74f65d-f8d2-41af-8469-6f8d020b41de\") " pod="openshift-console-operator/console-operator-58897d9998-db5pg" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.079734 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x2lr7\" (UniqueName: \"kubernetes.io/projected/e14d9fb0-f377-4331-8bc1-8f4017bb95a3-kube-api-access-x2lr7\") pod \"openshift-config-operator-7777fb866f-z9dj7\" (UID: \"e14d9fb0-f377-4331-8bc1-8f4017bb95a3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z9dj7" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.079751 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2d8dfb13-f0a0-465c-821d-95f0df0a98cf-etcd-client\") pod \"etcd-operator-b45778765-hgs4c\" (UID: \"2d8dfb13-f0a0-465c-821d-95f0df0a98cf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hgs4c" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.079767 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.079784 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5f75ab4e-45c1-4ed9-b966-afa91dbc88a6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-wdsj4\" (UID: \"5f75ab4e-45c1-4ed9-b966-afa91dbc88a6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.079799 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stc4r\" (UniqueName: \"kubernetes.io/projected/bcd7a932-6db9-4cca-b619-852242324725-kube-api-access-stc4r\") pod \"machine-api-operator-5694c8668f-4gqrl\" (UID: \"bcd7a932-6db9-4cca-b619-852242324725\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4gqrl" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.079813 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/802d5225-ef3f-485c-bb85-3c0f18e42952-audit-dir\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.079831 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bcd7a932-6db9-4cca-b619-852242324725-images\") pod \"machine-api-operator-5694c8668f-4gqrl\" (UID: \"bcd7a932-6db9-4cca-b619-852242324725\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4gqrl" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.079845 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d8dfb13-f0a0-465c-821d-95f0df0a98cf-serving-cert\") pod \"etcd-operator-b45778765-hgs4c\" (UID: \"2d8dfb13-f0a0-465c-821d-95f0df0a98cf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hgs4c" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.079871 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f13811e7-14eb-4a17-90a1-345619f9fb29-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-czqdr\" (UID: \"f13811e7-14eb-4a17-90a1-345619f9fb29\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-czqdr" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.079889 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5f75ab4e-45c1-4ed9-b966-afa91dbc88a6-audit-policies\") pod \"apiserver-7bbb656c7d-wdsj4\" (UID: \"5f75ab4e-45c1-4ed9-b966-afa91dbc88a6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.079909 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f75ab4e-45c1-4ed9-b966-afa91dbc88a6-audit-dir\") pod 
\"apiserver-7bbb656c7d-wdsj4\" (UID: \"5f75ab4e-45c1-4ed9-b966-afa91dbc88a6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.079928 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1e92f9b-2291-4bd5-80b8-c2f9e667acf8-client-ca\") pod \"controller-manager-879f6c89f-sxjf5\" (UID: \"c1e92f9b-2291-4bd5-80b8-c2f9e667acf8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sxjf5" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.079952 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dz6sg\" (UniqueName: \"kubernetes.io/projected/5f75ab4e-45c1-4ed9-b966-afa91dbc88a6-kube-api-access-dz6sg\") pod \"apiserver-7bbb656c7d-wdsj4\" (UID: \"5f75ab4e-45c1-4ed9-b966-afa91dbc88a6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.079967 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.079981 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f13811e7-14eb-4a17-90a1-345619f9fb29-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-czqdr\" (UID: \"f13811e7-14eb-4a17-90a1-345619f9fb29\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-czqdr" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.079999 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/466718f1-f118-4f13-a983-14060aef09e6-bound-sa-token\") pod \"ingress-operator-5b745b69d9-drp8h\" (UID: \"466718f1-f118-4f13-a983-14060aef09e6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-drp8h" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.080819 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/5f75ab4e-45c1-4ed9-b966-afa91dbc88a6-audit-policies\") pod \"apiserver-7bbb656c7d-wdsj4\" (UID: \"5f75ab4e-45c1-4ed9-b966-afa91dbc88a6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.080395 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/5f75ab4e-45c1-4ed9-b966-afa91dbc88a6-audit-dir\") pod \"apiserver-7bbb656c7d-wdsj4\" (UID: \"5f75ab4e-45c1-4ed9-b966-afa91dbc88a6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.080249 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/e14d9fb0-f377-4331-8bc1-8f4017bb95a3-available-featuregates\") pod \"openshift-config-operator-7777fb866f-z9dj7\" (UID: \"e14d9fb0-f377-4331-8bc1-8f4017bb95a3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z9dj7" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.080978 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p8894\" (UniqueName: \"kubernetes.io/projected/466718f1-f118-4f13-a983-14060aef09e6-kube-api-access-p8894\") pod \"ingress-operator-5b745b69d9-drp8h\" (UID: \"466718f1-f118-4f13-a983-14060aef09e6\") " 
pod="openshift-ingress-operator/ingress-operator-5b745b69d9-drp8h" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.081010 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.081101 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8p9z\" (UniqueName: \"kubernetes.io/projected/d723501b-bb29-4d60-ad97-239eb749771f-kube-api-access-f8p9z\") pod \"downloads-7954f5f757-4m7jl\" (UID: \"d723501b-bb29-4d60-ad97-239eb749771f\") " pod="openshift-console/downloads-7954f5f757-4m7jl" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.081126 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5f75ab4e-45c1-4ed9-b966-afa91dbc88a6-encryption-config\") pod \"apiserver-7bbb656c7d-wdsj4\" (UID: \"5f75ab4e-45c1-4ed9-b966-afa91dbc88a6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.081142 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1b479ec-e8d7-4fb6-8d0b-9fac28697df7-service-ca-bundle\") pod \"authentication-operator-69f744f599-pmglg\" (UID: \"b1b479ec-e8d7-4fb6-8d0b-9fac28697df7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pmglg" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.081165 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/aceeef0f-cb36-43d6-8e09-35949fe73911-auth-proxy-config\") pod \"machine-config-operator-74547568cd-w4lzx\" (UID: \"aceeef0f-cb36-43d6-8e09-35949fe73911\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w4lzx" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.081199 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1b479ec-e8d7-4fb6-8d0b-9fac28697df7-config\") pod \"authentication-operator-69f744f599-pmglg\" (UID: \"b1b479ec-e8d7-4fb6-8d0b-9fac28697df7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pmglg" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.081220 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hhrf\" (UniqueName: \"kubernetes.io/projected/aceeef0f-cb36-43d6-8e09-35949fe73911-kube-api-access-7hhrf\") pod \"machine-config-operator-74547568cd-w4lzx\" (UID: \"aceeef0f-cb36-43d6-8e09-35949fe73911\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w4lzx" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.081240 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5246\" (UniqueName: \"kubernetes.io/projected/4ba2ceb2-34e1-487c-9b13-0a480d6cc521-kube-api-access-h5246\") pod \"dns-operator-744455d44c-8fgxq\" (UID: \"4ba2ceb2-34e1-487c-9b13-0a480d6cc521\") " pod="openshift-dns-operator/dns-operator-744455d44c-8fgxq" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.081258 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f13811e7-14eb-4a17-90a1-345619f9fb29-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-czqdr\" (UID: \"f13811e7-14eb-4a17-90a1-345619f9fb29\") " 
pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-czqdr" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.081281 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6lcrc\" (UniqueName: \"kubernetes.io/projected/b1b479ec-e8d7-4fb6-8d0b-9fac28697df7-kube-api-access-6lcrc\") pod \"authentication-operator-69f744f599-pmglg\" (UID: \"b1b479ec-e8d7-4fb6-8d0b-9fac28697df7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pmglg" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.081302 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/466718f1-f118-4f13-a983-14060aef09e6-metrics-tls\") pod \"ingress-operator-5b745b69d9-drp8h\" (UID: \"466718f1-f118-4f13-a983-14060aef09e6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-drp8h" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.081324 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5f75ab4e-45c1-4ed9-b966-afa91dbc88a6-etcd-client\") pod \"apiserver-7bbb656c7d-wdsj4\" (UID: \"5f75ab4e-45c1-4ed9-b966-afa91dbc88a6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.081345 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fbprf\" (UniqueName: \"kubernetes.io/projected/c1e92f9b-2291-4bd5-80b8-c2f9e667acf8-kube-api-access-fbprf\") pod \"controller-manager-879f6c89f-sxjf5\" (UID: \"c1e92f9b-2291-4bd5-80b8-c2f9e667acf8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sxjf5" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.081364 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/aceeef0f-cb36-43d6-8e09-35949fe73911-proxy-tls\") pod \"machine-config-operator-74547568cd-w4lzx\" (UID: \"aceeef0f-cb36-43d6-8e09-35949fe73911\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w4lzx" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.081377 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1e92f9b-2291-4bd5-80b8-c2f9e667acf8-client-ca\") pod \"controller-manager-879f6c89f-sxjf5\" (UID: \"c1e92f9b-2291-4bd5-80b8-c2f9e667acf8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sxjf5" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.081382 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c1e92f9b-2291-4bd5-80b8-c2f9e667acf8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-sxjf5\" (UID: \"c1e92f9b-2291-4bd5-80b8-c2f9e667acf8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sxjf5" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.081026 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1b479ec-e8d7-4fb6-8d0b-9fac28697df7-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-pmglg\" (UID: \"b1b479ec-e8d7-4fb6-8d0b-9fac28697df7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pmglg" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.081458 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" 
Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.081523 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1e92f9b-2291-4bd5-80b8-c2f9e667acf8-config\") pod \"controller-manager-879f6c89f-sxjf5\" (UID: \"c1e92f9b-2291-4bd5-80b8-c2f9e667acf8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sxjf5" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.081532 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/bcd7a932-6db9-4cca-b619-852242324725-images\") pod \"machine-api-operator-5694c8668f-4gqrl\" (UID: \"bcd7a932-6db9-4cca-b619-852242324725\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4gqrl" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.081673 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f75ab4e-45c1-4ed9-b966-afa91dbc88a6-serving-cert\") pod \"apiserver-7bbb656c7d-wdsj4\" (UID: \"5f75ab4e-45c1-4ed9-b966-afa91dbc88a6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.081747 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/aceeef0f-cb36-43d6-8e09-35949fe73911-images\") pod \"machine-config-operator-74547568cd-w4lzx\" (UID: \"aceeef0f-cb36-43d6-8e09-35949fe73911\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w4lzx" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.083656 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b1b479ec-e8d7-4fb6-8d0b-9fac28697df7-service-ca-bundle\") pod \"authentication-operator-69f744f599-pmglg\" (UID: \"b1b479ec-e8d7-4fb6-8d0b-9fac28697df7\") " 
pod="openshift-authentication-operator/authentication-operator-69f744f599-pmglg" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.084183 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/5f75ab4e-45c1-4ed9-b966-afa91dbc88a6-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-wdsj4\" (UID: \"5f75ab4e-45c1-4ed9-b966-afa91dbc88a6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.089200 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c1e92f9b-2291-4bd5-80b8-c2f9e667acf8-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-sxjf5\" (UID: \"c1e92f9b-2291-4bd5-80b8-c2f9e667acf8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sxjf5" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.090881 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwsmj\" (UniqueName: \"kubernetes.io/projected/2d8dfb13-f0a0-465c-821d-95f0df0a98cf-kube-api-access-xwsmj\") pod \"etcd-operator-b45778765-hgs4c\" (UID: \"2d8dfb13-f0a0-465c-821d-95f0df0a98cf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hgs4c" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.090966 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1e92f9b-2291-4bd5-80b8-c2f9e667acf8-serving-cert\") pod \"controller-manager-879f6c89f-sxjf5\" (UID: \"c1e92f9b-2291-4bd5-80b8-c2f9e667acf8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sxjf5" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.091025 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: 
\"kubernetes.io/configmap/802d5225-ef3f-485c-bb85-3c0f18e42952-audit-policies\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.091515 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1b479ec-e8d7-4fb6-8d0b-9fac28697df7-serving-cert\") pod \"authentication-operator-69f744f599-pmglg\" (UID: \"b1b479ec-e8d7-4fb6-8d0b-9fac28697df7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pmglg" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.091686 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcd7a932-6db9-4cca-b619-852242324725-config\") pod \"machine-api-operator-5694c8668f-4gqrl\" (UID: \"bcd7a932-6db9-4cca-b619-852242324725\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4gqrl" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.093434 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b1b479ec-e8d7-4fb6-8d0b-9fac28697df7-config\") pod \"authentication-operator-69f744f599-pmglg\" (UID: \"b1b479ec-e8d7-4fb6-8d0b-9fac28697df7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pmglg" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.093919 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1e92f9b-2291-4bd5-80b8-c2f9e667acf8-serving-cert\") pod \"controller-manager-879f6c89f-sxjf5\" (UID: \"c1e92f9b-2291-4bd5-80b8-c2f9e667acf8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sxjf5" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.094699 4751 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bcd7a932-6db9-4cca-b619-852242324725-config\") pod \"machine-api-operator-5694c8668f-4gqrl\" (UID: \"bcd7a932-6db9-4cca-b619-852242324725\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4gqrl" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.094701 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/5f75ab4e-45c1-4ed9-b966-afa91dbc88a6-etcd-client\") pod \"apiserver-7bbb656c7d-wdsj4\" (UID: \"5f75ab4e-45c1-4ed9-b966-afa91dbc88a6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.094790 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f75ab4e-45c1-4ed9-b966-afa91dbc88a6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-wdsj4\" (UID: \"5f75ab4e-45c1-4ed9-b966-afa91dbc88a6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.094852 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnk4h\" (UniqueName: \"kubernetes.io/projected/802d5225-ef3f-485c-bb85-3c0f18e42952-kube-api-access-rnk4h\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.095316 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5f75ab4e-45c1-4ed9-b966-afa91dbc88a6-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-wdsj4\" (UID: \"5f75ab4e-45c1-4ed9-b966-afa91dbc88a6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.095422 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8dfb13-f0a0-465c-821d-95f0df0a98cf-config\") pod \"etcd-operator-b45778765-hgs4c\" (UID: \"2d8dfb13-f0a0-465c-821d-95f0df0a98cf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hgs4c" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.095460 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.095509 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.095532 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a74f65d-f8d2-41af-8469-6f8d020b41de-serving-cert\") pod \"console-operator-58897d9998-db5pg\" (UID: \"6a74f65d-f8d2-41af-8469-6f8d020b41de\") " pod="openshift-console-operator/console-operator-58897d9998-db5pg" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.095523 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b1b479ec-e8d7-4fb6-8d0b-9fac28697df7-serving-cert\") pod \"authentication-operator-69f744f599-pmglg\" (UID: 
\"b1b479ec-e8d7-4fb6-8d0b-9fac28697df7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pmglg" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.095550 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.095593 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/466718f1-f118-4f13-a983-14060aef09e6-trusted-ca\") pod \"ingress-operator-5b745b69d9-drp8h\" (UID: \"466718f1-f118-4f13-a983-14060aef09e6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-drp8h" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.095613 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4ba2ceb2-34e1-487c-9b13-0a480d6cc521-metrics-tls\") pod \"dns-operator-744455d44c-8fgxq\" (UID: \"4ba2ceb2-34e1-487c-9b13-0a480d6cc521\") " pod="openshift-dns-operator/dns-operator-744455d44c-8fgxq" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.095634 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.095653 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b9810521-7440-49d4-bf04-7dbe3324cc5b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-v8p8j\" (UID: \"b9810521-7440-49d4-bf04-7dbe3324cc5b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-v8p8j" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.096736 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.096748 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e14d9fb0-f377-4331-8bc1-8f4017bb95a3-serving-cert\") pod \"openshift-config-operator-7777fb866f-z9dj7\" (UID: \"e14d9fb0-f377-4331-8bc1-8f4017bb95a3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z9dj7" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.098412 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/5f75ab4e-45c1-4ed9-b966-afa91dbc88a6-encryption-config\") pod \"apiserver-7bbb656c7d-wdsj4\" (UID: \"5f75ab4e-45c1-4ed9-b966-afa91dbc88a6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.098619 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xr2gt"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.100994 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5f75ab4e-45c1-4ed9-b966-afa91dbc88a6-serving-cert\") pod \"apiserver-7bbb656c7d-wdsj4\" (UID: \"5f75ab4e-45c1-4ed9-b966-afa91dbc88a6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 
14:43:56.098850 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-h262z"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.101527 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/bcd7a932-6db9-4cca-b619-852242324725-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-4gqrl\" (UID: \"bcd7a932-6db9-4cca-b619-852242324725\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4gqrl" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.102329 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7762w"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.104903 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-v8p8j"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.105654 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8m7f4"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.108983 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-db5pg"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.110103 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8fgxq"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.111663 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xv2tk"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.113608 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5f7jc"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.114579 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-w4lzx"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.115838 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l76jv"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.117144 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-drp8h"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.117555 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.118811 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hgs4c"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.120187 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h4drr"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.120208 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-b44gm"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.123172 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvfvk"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.124737 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vbfvz"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.125622 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-4m7jl"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.127020 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nz99n"] Jan 31 
14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.128142 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xqgfv"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.129119 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5r6kv"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.130083 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-czqdr"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.131009 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-x4njd"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.131516 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-x4njd" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.132816 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-x4rnh"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.133631 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mpbgx"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.133683 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-x4rnh" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.133920 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7hc86"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.136107 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-x4rnh"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.137097 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497830-rpwmp"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.137206 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.138106 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-skzbg"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.139053 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vc9q2"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.140089 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hjp9"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.141027 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ghblb"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.141998 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-qdsgb"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.142582 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qdsgb" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.142964 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-qdsgb"] Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.157019 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.177229 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.196271 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.196314 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.196366 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 
crc kubenswrapper[4751]: I0131 14:43:56.196399 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b9tnj\" (UniqueName: \"kubernetes.io/projected/b9810521-7440-49d4-bf04-7dbe3324cc5b-kube-api-access-b9tnj\") pod \"multus-admission-controller-857f4d67dd-v8p8j\" (UID: \"b9810521-7440-49d4-bf04-7dbe3324cc5b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-v8p8j" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.196426 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6a74f65d-f8d2-41af-8469-6f8d020b41de-trusted-ca\") pod \"console-operator-58897d9998-db5pg\" (UID: \"6a74f65d-f8d2-41af-8469-6f8d020b41de\") " pod="openshift-console-operator/console-operator-58897d9998-db5pg" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.196456 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.196479 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a74f65d-f8d2-41af-8469-6f8d020b41de-config\") pod \"console-operator-58897d9998-db5pg\" (UID: \"6a74f65d-f8d2-41af-8469-6f8d020b41de\") " pod="openshift-console-operator/console-operator-58897d9998-db5pg" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.196510 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2d8dfb13-f0a0-465c-821d-95f0df0a98cf-etcd-client\") pod \"etcd-operator-b45778765-hgs4c\" (UID: 
\"2d8dfb13-f0a0-465c-821d-95f0df0a98cf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hgs4c" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.196540 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/802d5225-ef3f-485c-bb85-3c0f18e42952-audit-dir\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.196563 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d8dfb13-f0a0-465c-821d-95f0df0a98cf-serving-cert\") pod \"etcd-operator-b45778765-hgs4c\" (UID: \"2d8dfb13-f0a0-465c-821d-95f0df0a98cf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hgs4c" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.196584 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/802d5225-ef3f-485c-bb85-3c0f18e42952-audit-dir\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.196598 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f13811e7-14eb-4a17-90a1-345619f9fb29-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-czqdr\" (UID: \"f13811e7-14eb-4a17-90a1-345619f9fb29\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-czqdr" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.196658 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.196702 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.196728 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f13811e7-14eb-4a17-90a1-345619f9fb29-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-czqdr\" (UID: \"f13811e7-14eb-4a17-90a1-345619f9fb29\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-czqdr" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.196754 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/466718f1-f118-4f13-a983-14060aef09e6-bound-sa-token\") pod \"ingress-operator-5b745b69d9-drp8h\" (UID: \"466718f1-f118-4f13-a983-14060aef09e6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-drp8h" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.196784 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p8894\" (UniqueName: \"kubernetes.io/projected/466718f1-f118-4f13-a983-14060aef09e6-kube-api-access-p8894\") pod \"ingress-operator-5b745b69d9-drp8h\" (UID: \"466718f1-f118-4f13-a983-14060aef09e6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-drp8h" Jan 31 14:43:56 
crc kubenswrapper[4751]: I0131 14:43:56.196824 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aceeef0f-cb36-43d6-8e09-35949fe73911-auth-proxy-config\") pod \"machine-config-operator-74547568cd-w4lzx\" (UID: \"aceeef0f-cb36-43d6-8e09-35949fe73911\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w4lzx" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.196847 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f13811e7-14eb-4a17-90a1-345619f9fb29-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-czqdr\" (UID: \"f13811e7-14eb-4a17-90a1-345619f9fb29\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-czqdr" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.196883 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hhrf\" (UniqueName: \"kubernetes.io/projected/aceeef0f-cb36-43d6-8e09-35949fe73911-kube-api-access-7hhrf\") pod \"machine-config-operator-74547568cd-w4lzx\" (UID: \"aceeef0f-cb36-43d6-8e09-35949fe73911\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w4lzx" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.196908 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5246\" (UniqueName: \"kubernetes.io/projected/4ba2ceb2-34e1-487c-9b13-0a480d6cc521-kube-api-access-h5246\") pod \"dns-operator-744455d44c-8fgxq\" (UID: \"4ba2ceb2-34e1-487c-9b13-0a480d6cc521\") " pod="openshift-dns-operator/dns-operator-744455d44c-8fgxq" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.196938 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/466718f1-f118-4f13-a983-14060aef09e6-metrics-tls\") pod 
\"ingress-operator-5b745b69d9-drp8h\" (UID: \"466718f1-f118-4f13-a983-14060aef09e6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-drp8h" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.196972 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/aceeef0f-cb36-43d6-8e09-35949fe73911-proxy-tls\") pod \"machine-config-operator-74547568cd-w4lzx\" (UID: \"aceeef0f-cb36-43d6-8e09-35949fe73911\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w4lzx" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.196995 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.197025 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/aceeef0f-cb36-43d6-8e09-35949fe73911-images\") pod \"machine-config-operator-74547568cd-w4lzx\" (UID: \"aceeef0f-cb36-43d6-8e09-35949fe73911\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w4lzx" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.197045 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xwsmj\" (UniqueName: \"kubernetes.io/projected/2d8dfb13-f0a0-465c-821d-95f0df0a98cf-kube-api-access-xwsmj\") pod \"etcd-operator-b45778765-hgs4c\" (UID: \"2d8dfb13-f0a0-465c-821d-95f0df0a98cf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hgs4c" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.197063 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/802d5225-ef3f-485c-bb85-3c0f18e42952-audit-policies\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.197164 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnk4h\" (UniqueName: \"kubernetes.io/projected/802d5225-ef3f-485c-bb85-3c0f18e42952-kube-api-access-rnk4h\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.197186 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8dfb13-f0a0-465c-821d-95f0df0a98cf-config\") pod \"etcd-operator-b45778765-hgs4c\" (UID: \"2d8dfb13-f0a0-465c-821d-95f0df0a98cf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hgs4c" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.197229 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6a74f65d-f8d2-41af-8469-6f8d020b41de-config\") pod \"console-operator-58897d9998-db5pg\" (UID: \"6a74f65d-f8d2-41af-8469-6f8d020b41de\") " pod="openshift-console-operator/console-operator-58897d9998-db5pg" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.197234 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc 
kubenswrapper[4751]: I0131 14:43:56.197335 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.197356 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a74f65d-f8d2-41af-8469-6f8d020b41de-serving-cert\") pod \"console-operator-58897d9998-db5pg\" (UID: \"6a74f65d-f8d2-41af-8469-6f8d020b41de\") " pod="openshift-console-operator/console-operator-58897d9998-db5pg" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.197376 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.197392 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.197411 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/466718f1-f118-4f13-a983-14060aef09e6-trusted-ca\") pod 
\"ingress-operator-5b745b69d9-drp8h\" (UID: \"466718f1-f118-4f13-a983-14060aef09e6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-drp8h" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.197427 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4ba2ceb2-34e1-487c-9b13-0a480d6cc521-metrics-tls\") pod \"dns-operator-744455d44c-8fgxq\" (UID: \"4ba2ceb2-34e1-487c-9b13-0a480d6cc521\") " pod="openshift-dns-operator/dns-operator-744455d44c-8fgxq" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.197444 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b9810521-7440-49d4-bf04-7dbe3324cc5b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-v8p8j\" (UID: \"b9810521-7440-49d4-bf04-7dbe3324cc5b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-v8p8j" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.197475 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2d8dfb13-f0a0-465c-821d-95f0df0a98cf-etcd-ca\") pod \"etcd-operator-b45778765-hgs4c\" (UID: \"2d8dfb13-f0a0-465c-821d-95f0df0a98cf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hgs4c" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.197742 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.198268 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/aceeef0f-cb36-43d6-8e09-35949fe73911-auth-proxy-config\") pod \"machine-config-operator-74547568cd-w4lzx\" (UID: \"aceeef0f-cb36-43d6-8e09-35949fe73911\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w4lzx" Jan 31 
14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.198355 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2d8dfb13-f0a0-465c-821d-95f0df0a98cf-etcd-service-ca\") pod \"etcd-operator-b45778765-hgs4c\" (UID: \"2d8dfb13-f0a0-465c-821d-95f0df0a98cf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hgs4c" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.198388 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57wkh\" (UniqueName: \"kubernetes.io/projected/6a74f65d-f8d2-41af-8469-6f8d020b41de-kube-api-access-57wkh\") pod \"console-operator-58897d9998-db5pg\" (UID: \"6a74f65d-f8d2-41af-8469-6f8d020b41de\") " pod="openshift-console-operator/console-operator-58897d9998-db5pg" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.198645 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/2d8dfb13-f0a0-465c-821d-95f0df0a98cf-etcd-ca\") pod \"etcd-operator-b45778765-hgs4c\" (UID: \"2d8dfb13-f0a0-465c-821d-95f0df0a98cf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hgs4c" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.198930 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6a74f65d-f8d2-41af-8469-6f8d020b41de-trusted-ca\") pod \"console-operator-58897d9998-db5pg\" (UID: \"6a74f65d-f8d2-41af-8469-6f8d020b41de\") " pod="openshift-console-operator/console-operator-58897d9998-db5pg" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.199115 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2d8dfb13-f0a0-465c-821d-95f0df0a98cf-config\") pod \"etcd-operator-b45778765-hgs4c\" (UID: \"2d8dfb13-f0a0-465c-821d-95f0df0a98cf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hgs4c" Jan 
31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.199140 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/2d8dfb13-f0a0-465c-821d-95f0df0a98cf-etcd-service-ca\") pod \"etcd-operator-b45778765-hgs4c\" (UID: \"2d8dfb13-f0a0-465c-821d-95f0df0a98cf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hgs4c" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.200635 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2d8dfb13-f0a0-465c-821d-95f0df0a98cf-serving-cert\") pod \"etcd-operator-b45778765-hgs4c\" (UID: \"2d8dfb13-f0a0-465c-821d-95f0df0a98cf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hgs4c" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.200793 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/4ba2ceb2-34e1-487c-9b13-0a480d6cc521-metrics-tls\") pod \"dns-operator-744455d44c-8fgxq\" (UID: \"4ba2ceb2-34e1-487c-9b13-0a480d6cc521\") " pod="openshift-dns-operator/dns-operator-744455d44c-8fgxq" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.200984 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2d8dfb13-f0a0-465c-821d-95f0df0a98cf-etcd-client\") pod \"etcd-operator-b45778765-hgs4c\" (UID: \"2d8dfb13-f0a0-465c-821d-95f0df0a98cf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hgs4c" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.201365 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/b9810521-7440-49d4-bf04-7dbe3324cc5b-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-v8p8j\" (UID: \"b9810521-7440-49d4-bf04-7dbe3324cc5b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-v8p8j" Jan 31 14:43:56 crc 
kubenswrapper[4751]: I0131 14:43:56.201936 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6a74f65d-f8d2-41af-8469-6f8d020b41de-serving-cert\") pod \"console-operator-58897d9998-db5pg\" (UID: \"6a74f65d-f8d2-41af-8469-6f8d020b41de\") " pod="openshift-console-operator/console-operator-58897d9998-db5pg" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.210821 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.226005 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.234490 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.241957 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.251715 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: 
\"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.257057 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.262263 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.276682 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.280345 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.297236 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.301381 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " 
pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.317860 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.320540 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.337847 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.348579 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.357639 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.359180 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/802d5225-ef3f-485c-bb85-3c0f18e42952-audit-policies\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.376810 4751 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.387931 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.396865 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.398759 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.422333 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.428609 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.437740 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 
14:43:56.457469 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.463487 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/466718f1-f118-4f13-a983-14060aef09e6-metrics-tls\") pod \"ingress-operator-5b745b69d9-drp8h\" (UID: \"466718f1-f118-4f13-a983-14060aef09e6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-drp8h" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.477247 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.504248 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.511253 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/466718f1-f118-4f13-a983-14060aef09e6-trusted-ca\") pod \"ingress-operator-5b745b69d9-drp8h\" (UID: \"466718f1-f118-4f13-a983-14060aef09e6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-drp8h" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.516981 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.537639 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.556712 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.577184 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" 
Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.598047 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.617725 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.644862 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.657268 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.676985 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.698124 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.717750 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.738022 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.757697 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.762247 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f13811e7-14eb-4a17-90a1-345619f9fb29-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-czqdr\" (UID: 
\"f13811e7-14eb-4a17-90a1-345619f9fb29\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-czqdr" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.777978 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.788174 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f13811e7-14eb-4a17-90a1-345619f9fb29-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-czqdr\" (UID: \"f13811e7-14eb-4a17-90a1-345619f9fb29\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-czqdr" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.797334 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.817154 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.819148 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/aceeef0f-cb36-43d6-8e09-35949fe73911-images\") pod \"machine-config-operator-74547568cd-w4lzx\" (UID: \"aceeef0f-cb36-43d6-8e09-35949fe73911\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w4lzx" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.862775 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.873231 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/aceeef0f-cb36-43d6-8e09-35949fe73911-proxy-tls\") pod \"machine-config-operator-74547568cd-w4lzx\" (UID: \"aceeef0f-cb36-43d6-8e09-35949fe73911\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w4lzx" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.877551 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.917809 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.938264 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.957915 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.978762 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 31 14:43:56 crc kubenswrapper[4751]: I0131 14:43:56.997344 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.018539 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.038534 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.057647 4751 request.go:700] Waited for 1.004990057s due to 
client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/secrets?fieldSelector=metadata.name%3Dcluster-image-registry-operator-dockercfg-m4qtx&limit=500&resourceVersion=0 Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.059765 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.079286 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.097961 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.118128 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.139054 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.158549 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.177416 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.198925 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.218625 4751 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.237557 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.258594 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.277849 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.301698 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.318054 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.338020 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.357899 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.379013 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.397732 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.417655 4751 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.438393 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.458468 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.478458 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.498297 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.530293 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.538292 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.558310 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.577961 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.599280 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.618559 4751 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.639128 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.658632 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.678163 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.698837 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.718732 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.737788 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.758635 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.806852 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dz6sg\" (UniqueName: \"kubernetes.io/projected/5f75ab4e-45c1-4ed9-b966-afa91dbc88a6-kube-api-access-dz6sg\") pod \"apiserver-7bbb656c7d-wdsj4\" (UID: \"5f75ab4e-45c1-4ed9-b966-afa91dbc88a6\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.824269 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x2lr7\" (UniqueName: \"kubernetes.io/projected/e14d9fb0-f377-4331-8bc1-8f4017bb95a3-kube-api-access-x2lr7\") pod \"openshift-config-operator-7777fb866f-z9dj7\" (UID: 
\"e14d9fb0-f377-4331-8bc1-8f4017bb95a3\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-z9dj7" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.849594 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stc4r\" (UniqueName: \"kubernetes.io/projected/bcd7a932-6db9-4cca-b619-852242324725-kube-api-access-stc4r\") pod \"machine-api-operator-5694c8668f-4gqrl\" (UID: \"bcd7a932-6db9-4cca-b619-852242324725\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-4gqrl" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.852010 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-4gqrl" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.865522 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6lcrc\" (UniqueName: \"kubernetes.io/projected/b1b479ec-e8d7-4fb6-8d0b-9fac28697df7-kube-api-access-6lcrc\") pod \"authentication-operator-69f744f599-pmglg\" (UID: \"b1b479ec-e8d7-4fb6-8d0b-9fac28697df7\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-pmglg" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.886323 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8p9z\" (UniqueName: \"kubernetes.io/projected/d723501b-bb29-4d60-ad97-239eb749771f-kube-api-access-f8p9z\") pod \"downloads-7954f5f757-4m7jl\" (UID: \"d723501b-bb29-4d60-ad97-239eb749771f\") " pod="openshift-console/downloads-7954f5f757-4m7jl" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.898966 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.904563 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fbprf\" (UniqueName: 
\"kubernetes.io/projected/c1e92f9b-2291-4bd5-80b8-c2f9e667acf8-kube-api-access-fbprf\") pod \"controller-manager-879f6c89f-sxjf5\" (UID: \"c1e92f9b-2291-4bd5-80b8-c2f9e667acf8\") " pod="openshift-controller-manager/controller-manager-879f6c89f-sxjf5" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.918817 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.937552 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.958679 4751 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.977858 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 31 14:43:57 crc kubenswrapper[4751]: I0131 14:43:57.998679 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.013174 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-sxjf5" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.017977 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.028422 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.039318 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.057856 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.072721 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z9dj7" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.076297 4751 request.go:700] Waited for 1.933476135s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-canary/secrets?fieldSelector=metadata.name%3Ddefault-dockercfg-2llfx&limit=500&resourceVersion=0 Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.078964 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.121275 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b9tnj\" (UniqueName: \"kubernetes.io/projected/b9810521-7440-49d4-bf04-7dbe3324cc5b-kube-api-access-b9tnj\") pod \"multus-admission-controller-857f4d67dd-v8p8j\" (UID: \"b9810521-7440-49d4-bf04-7dbe3324cc5b\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-v8p8j" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.132516 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-pmglg" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.144473 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/466718f1-f118-4f13-a983-14060aef09e6-bound-sa-token\") pod \"ingress-operator-5b745b69d9-drp8h\" (UID: \"466718f1-f118-4f13-a983-14060aef09e6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-drp8h" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.154301 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p8894\" (UniqueName: \"kubernetes.io/projected/466718f1-f118-4f13-a983-14060aef09e6-kube-api-access-p8894\") pod \"ingress-operator-5b745b69d9-drp8h\" (UID: \"466718f1-f118-4f13-a983-14060aef09e6\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-drp8h" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.165627 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-4m7jl" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.182177 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f13811e7-14eb-4a17-90a1-345619f9fb29-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-czqdr\" (UID: \"f13811e7-14eb-4a17-90a1-345619f9fb29\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-czqdr" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.204580 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hhrf\" (UniqueName: \"kubernetes.io/projected/aceeef0f-cb36-43d6-8e09-35949fe73911-kube-api-access-7hhrf\") pod \"machine-config-operator-74547568cd-w4lzx\" (UID: \"aceeef0f-cb36-43d6-8e09-35949fe73911\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w4lzx" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.210463 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h5246\" (UniqueName: \"kubernetes.io/projected/4ba2ceb2-34e1-487c-9b13-0a480d6cc521-kube-api-access-h5246\") pod \"dns-operator-744455d44c-8fgxq\" (UID: \"4ba2ceb2-34e1-487c-9b13-0a480d6cc521\") " pod="openshift-dns-operator/dns-operator-744455d44c-8fgxq" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.236788 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwsmj\" (UniqueName: \"kubernetes.io/projected/2d8dfb13-f0a0-465c-821d-95f0df0a98cf-kube-api-access-xwsmj\") pod \"etcd-operator-b45778765-hgs4c\" (UID: \"2d8dfb13-f0a0-465c-821d-95f0df0a98cf\") " pod="openshift-etcd-operator/etcd-operator-b45778765-hgs4c" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.247218 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-4gqrl"] Jan 31 14:43:58 crc 
kubenswrapper[4751]: I0131 14:43:58.250993 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-8fgxq" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.254251 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnk4h\" (UniqueName: \"kubernetes.io/projected/802d5225-ef3f-485c-bb85-3c0f18e42952-kube-api-access-rnk4h\") pod \"oauth-openshift-558db77b4-xr2gt\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.269235 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-v8p8j" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.271288 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57wkh\" (UniqueName: \"kubernetes.io/projected/6a74f65d-f8d2-41af-8469-6f8d020b41de-kube-api-access-57wkh\") pod \"console-operator-58897d9998-db5pg\" (UID: \"6a74f65d-f8d2-41af-8469-6f8d020b41de\") " pod="openshift-console-operator/console-operator-58897d9998-db5pg" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.281678 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-hgs4c" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.281911 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4"] Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.295449 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:43:58 crc kubenswrapper[4751]: W0131 14:43:58.301062 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbcd7a932_6db9_4cca_b619_852242324725.slice/crio-365a012e0dc256fca522addd41c83a63fb808621ec56987dcd5bb062ddda6006 WatchSource:0}: Error finding container 365a012e0dc256fca522addd41c83a63fb808621ec56987dcd5bb062ddda6006: Status 404 returned error can't find the container with id 365a012e0dc256fca522addd41c83a63fb808621ec56987dcd5bb062ddda6006 Jan 31 14:43:58 crc kubenswrapper[4751]: W0131 14:43:58.302736 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5f75ab4e_45c1_4ed9_b966_afa91dbc88a6.slice/crio-ab97991b8db78d8e752d40d1152ee78f2b57fd461dd5402070894dd0d8f788b9 WatchSource:0}: Error finding container ab97991b8db78d8e752d40d1152ee78f2b57fd461dd5402070894dd0d8f788b9: Status 404 returned error can't find the container with id ab97991b8db78d8e752d40d1152ee78f2b57fd461dd5402070894dd0d8f788b9 Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.317950 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-drp8h" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.337342 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84e2930a-5ae3-4171-a3dd-e5eea62ef157-serving-cert\") pod \"route-controller-manager-6576b87f9c-7762w\" (UID: \"84e2930a-5ae3-4171-a3dd-e5eea62ef157\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7762w" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.337386 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4e18e163-6cf0-48ef-9a6f-90cbece870b0-installation-pull-secrets\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.337410 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4e18e163-6cf0-48ef-9a6f-90cbece870b0-bound-sa-token\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.337430 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5caeb3dc-2a42-41b5-ac91-c1c8a216fb43-trusted-ca-bundle\") pod \"console-f9d7485db-h262z\" (UID: \"5caeb3dc-2a42-41b5-ac91-c1c8a216fb43\") " pod="openshift-console/console-f9d7485db-h262z" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.337448 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"etcd-client\" (UniqueName: \"kubernetes.io/secret/89314349-bbc8-4886-b93b-51358e4e71b0-etcd-client\") pod \"apiserver-76f77b778f-5f7jc\" (UID: \"89314349-bbc8-4886-b93b-51358e4e71b0\") " pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.337481 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4e18e163-6cf0-48ef-9a6f-90cbece870b0-trusted-ca\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.337505 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwdnj\" (UniqueName: \"kubernetes.io/projected/ac1cf81b-8ec8-4ae4-bfb3-d46bb75f24d4-kube-api-access-vwdnj\") pod \"openshift-controller-manager-operator-756b6f6bc6-s7gwp\" (UID: \"ac1cf81b-8ec8-4ae4-bfb3-d46bb75f24d4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s7gwp" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.337532 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/89314349-bbc8-4886-b93b-51358e4e71b0-audit\") pod \"apiserver-76f77b778f-5f7jc\" (UID: \"89314349-bbc8-4886-b93b-51358e4e71b0\") " pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.337930 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4e18e163-6cf0-48ef-9a6f-90cbece870b0-registry-certificates\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " 
pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.337965 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/89314349-bbc8-4886-b93b-51358e4e71b0-audit-dir\") pod \"apiserver-76f77b778f-5f7jc\" (UID: \"89314349-bbc8-4886-b93b-51358e4e71b0\") " pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.337989 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17d7ae01-24ad-448d-ae7c-10df353833f4-config\") pod \"kube-apiserver-operator-766d6c64bb-cp47m\" (UID: \"17d7ae01-24ad-448d-ae7c-10df353833f4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cp47m" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.338046 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/89314349-bbc8-4886-b93b-51358e4e71b0-etcd-serving-ca\") pod \"apiserver-76f77b778f-5f7jc\" (UID: \"89314349-bbc8-4886-b93b-51358e4e71b0\") " pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.338101 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01ff1674-4e01-4cdc-aea3-1e91a6a389e3-service-ca-bundle\") pod \"router-default-5444994796-5hn9b\" (UID: \"01ff1674-4e01-4cdc-aea3-1e91a6a389e3\") " pod="openshift-ingress/router-default-5444994796-5hn9b" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.338465 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/4c4b193a-a01b-440a-a94a-55c4b5f06586-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nz99n\" (UID: \"4c4b193a-a01b-440a-a94a-55c4b5f06586\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nz99n" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.338236 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-czqdr" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.338522 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac1cf81b-8ec8-4ae4-bfb3-d46bb75f24d4-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-s7gwp\" (UID: \"ac1cf81b-8ec8-4ae4-bfb3-d46bb75f24d4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s7gwp" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.338718 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89314349-bbc8-4886-b93b-51358e4e71b0-config\") pod \"apiserver-76f77b778f-5f7jc\" (UID: \"89314349-bbc8-4886-b93b-51358e4e71b0\") " pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.338775 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5caeb3dc-2a42-41b5-ac91-c1c8a216fb43-service-ca\") pod \"console-f9d7485db-h262z\" (UID: \"5caeb3dc-2a42-41b5-ac91-c1c8a216fb43\") " pod="openshift-console/console-f9d7485db-h262z" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.338807 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7sjfc\" (UniqueName: 
\"kubernetes.io/projected/853ca050-beae-4089-a5df-9556eeda508b-kube-api-access-7sjfc\") pod \"cluster-samples-operator-665b6dd947-8m7f4\" (UID: \"853ca050-beae-4089-a5df-9556eeda508b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8m7f4" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.338845 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c4b193a-a01b-440a-a94a-55c4b5f06586-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nz99n\" (UID: \"4c4b193a-a01b-440a-a94a-55c4b5f06586\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nz99n" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.338883 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/853ca050-beae-4089-a5df-9556eeda508b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-8m7f4\" (UID: \"853ca050-beae-4089-a5df-9556eeda508b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8m7f4" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.338926 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/01ff1674-4e01-4cdc-aea3-1e91a6a389e3-stats-auth\") pod \"router-default-5444994796-5hn9b\" (UID: \"01ff1674-4e01-4cdc-aea3-1e91a6a389e3\") " pod="openshift-ingress/router-default-5444994796-5hn9b" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.338965 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5caeb3dc-2a42-41b5-ac91-c1c8a216fb43-console-oauth-config\") pod \"console-f9d7485db-h262z\" (UID: \"5caeb3dc-2a42-41b5-ac91-c1c8a216fb43\") " 
pod="openshift-console/console-f9d7485db-h262z" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.339092 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84e2930a-5ae3-4171-a3dd-e5eea62ef157-config\") pod \"route-controller-manager-6576b87f9c-7762w\" (UID: \"84e2930a-5ae3-4171-a3dd-e5eea62ef157\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7762w" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.339146 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n8662\" (UniqueName: \"kubernetes.io/projected/01ff1674-4e01-4cdc-aea3-1e91a6a389e3-kube-api-access-n8662\") pod \"router-default-5444994796-5hn9b\" (UID: \"01ff1674-4e01-4cdc-aea3-1e91a6a389e3\") " pod="openshift-ingress/router-default-5444994796-5hn9b" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.339185 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17d7ae01-24ad-448d-ae7c-10df353833f4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-cp47m\" (UID: \"17d7ae01-24ad-448d-ae7c-10df353833f4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cp47m" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.339238 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/01ff1674-4e01-4cdc-aea3-1e91a6a389e3-default-certificate\") pod \"router-default-5444994796-5hn9b\" (UID: \"01ff1674-4e01-4cdc-aea3-1e91a6a389e3\") " pod="openshift-ingress/router-default-5444994796-5hn9b" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.339287 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" 
(UniqueName: \"kubernetes.io/configmap/89314349-bbc8-4886-b93b-51358e4e71b0-image-import-ca\") pod \"apiserver-76f77b778f-5f7jc\" (UID: \"89314349-bbc8-4886-b93b-51358e4e71b0\") " pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.339321 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wswl\" (UniqueName: \"kubernetes.io/projected/4c4b193a-a01b-440a-a94a-55c4b5f06586-kube-api-access-9wswl\") pod \"openshift-apiserver-operator-796bbdcf4f-nz99n\" (UID: \"4c4b193a-a01b-440a-a94a-55c4b5f06586\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nz99n" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.339345 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89314349-bbc8-4886-b93b-51358e4e71b0-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5f7jc\" (UID: \"89314349-bbc8-4886-b93b-51358e4e71b0\") " pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.339415 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4e18e163-6cf0-48ef-9a6f-90cbece870b0-ca-trust-extracted\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.339437 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5caeb3dc-2a42-41b5-ac91-c1c8a216fb43-console-config\") pod \"console-f9d7485db-h262z\" (UID: \"5caeb3dc-2a42-41b5-ac91-c1c8a216fb43\") " pod="openshift-console/console-f9d7485db-h262z" Jan 31 14:43:58 crc 
kubenswrapper[4751]: I0131 14:43:58.339480 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lvlhx\" (UniqueName: \"kubernetes.io/projected/5caeb3dc-2a42-41b5-ac91-c1c8a216fb43-kube-api-access-lvlhx\") pod \"console-f9d7485db-h262z\" (UID: \"5caeb3dc-2a42-41b5-ac91-c1c8a216fb43\") " pod="openshift-console/console-f9d7485db-h262z" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.339516 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17d7ae01-24ad-448d-ae7c-10df353833f4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-cp47m\" (UID: \"17d7ae01-24ad-448d-ae7c-10df353833f4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cp47m" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.340333 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0058e7f4-92db-444d-a979-2880c3f83442-machine-approver-tls\") pod \"machine-approver-56656f9798-cc6m2\" (UID: \"0058e7f4-92db-444d-a979-2880c3f83442\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cc6m2" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.340397 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84e2930a-5ae3-4171-a3dd-e5eea62ef157-client-ca\") pod \"route-controller-manager-6576b87f9c-7762w\" (UID: \"84e2930a-5ae3-4171-a3dd-e5eea62ef157\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7762w" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.340452 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9th82\" (UniqueName: 
\"kubernetes.io/projected/d031fa1b-4d52-47d7-8c39-5fa21fb6c244-kube-api-access-9th82\") pod \"machine-config-controller-84d6567774-xv2tk\" (UID: \"d031fa1b-4d52-47d7-8c39-5fa21fb6c244\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xv2tk" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.340486 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89314349-bbc8-4886-b93b-51358e4e71b0-serving-cert\") pod \"apiserver-76f77b778f-5f7jc\" (UID: \"89314349-bbc8-4886-b93b-51358e4e71b0\") " pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.340509 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppx8c\" (UniqueName: \"kubernetes.io/projected/0058e7f4-92db-444d-a979-2880c3f83442-kube-api-access-ppx8c\") pod \"machine-approver-56656f9798-cc6m2\" (UID: \"0058e7f4-92db-444d-a979-2880c3f83442\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cc6m2" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.340571 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.340592 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtk42\" (UniqueName: \"kubernetes.io/projected/89314349-bbc8-4886-b93b-51358e4e71b0-kube-api-access-mtk42\") pod \"apiserver-76f77b778f-5f7jc\" (UID: \"89314349-bbc8-4886-b93b-51358e4e71b0\") " 
pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:43:58 crc kubenswrapper[4751]: E0131 14:43:58.342489 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:43:58.842463596 +0000 UTC m=+143.217176571 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.340621 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5caeb3dc-2a42-41b5-ac91-c1c8a216fb43-oauth-serving-cert\") pod \"console-f9d7485db-h262z\" (UID: \"5caeb3dc-2a42-41b5-ac91-c1c8a216fb43\") " pod="openshift-console/console-f9d7485db-h262z" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.343243 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4e18e163-6cf0-48ef-9a6f-90cbece870b0-registry-tls\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.343302 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/89314349-bbc8-4886-b93b-51358e4e71b0-node-pullsecrets\") pod 
\"apiserver-76f77b778f-5f7jc\" (UID: \"89314349-bbc8-4886-b93b-51358e4e71b0\") " pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.343335 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ksqdw\" (UniqueName: \"kubernetes.io/projected/84e2930a-5ae3-4171-a3dd-e5eea62ef157-kube-api-access-ksqdw\") pod \"route-controller-manager-6576b87f9c-7762w\" (UID: \"84e2930a-5ae3-4171-a3dd-e5eea62ef157\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7762w" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.343371 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0058e7f4-92db-444d-a979-2880c3f83442-auth-proxy-config\") pod \"machine-approver-56656f9798-cc6m2\" (UID: \"0058e7f4-92db-444d-a979-2880c3f83442\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cc6m2" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.343457 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d031fa1b-4d52-47d7-8c39-5fa21fb6c244-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xv2tk\" (UID: \"d031fa1b-4d52-47d7-8c39-5fa21fb6c244\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xv2tk" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.343492 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llx87\" (UniqueName: \"kubernetes.io/projected/4e18e163-6cf0-48ef-9a6f-90cbece870b0-kube-api-access-llx87\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:43:58 crc 
kubenswrapper[4751]: I0131 14:43:58.343638 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/01ff1674-4e01-4cdc-aea3-1e91a6a389e3-metrics-certs\") pod \"router-default-5444994796-5hn9b\" (UID: \"01ff1674-4e01-4cdc-aea3-1e91a6a389e3\") " pod="openshift-ingress/router-default-5444994796-5hn9b" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.345223 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w4lzx" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.343667 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5caeb3dc-2a42-41b5-ac91-c1c8a216fb43-console-serving-cert\") pod \"console-f9d7485db-h262z\" (UID: \"5caeb3dc-2a42-41b5-ac91-c1c8a216fb43\") " pod="openshift-console/console-f9d7485db-h262z" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.346395 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0058e7f4-92db-444d-a979-2880c3f83442-config\") pod \"machine-approver-56656f9798-cc6m2\" (UID: \"0058e7f4-92db-444d-a979-2880c3f83442\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cc6m2" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.346432 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/89314349-bbc8-4886-b93b-51358e4e71b0-encryption-config\") pod \"apiserver-76f77b778f-5f7jc\" (UID: \"89314349-bbc8-4886-b93b-51358e4e71b0\") " pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.346467 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d031fa1b-4d52-47d7-8c39-5fa21fb6c244-proxy-tls\") pod \"machine-config-controller-84d6567774-xv2tk\" (UID: \"d031fa1b-4d52-47d7-8c39-5fa21fb6c244\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xv2tk" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.346491 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac1cf81b-8ec8-4ae4-bfb3-d46bb75f24d4-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-s7gwp\" (UID: \"ac1cf81b-8ec8-4ae4-bfb3-d46bb75f24d4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s7gwp" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.354275 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-z9dj7"] Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.389509 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-pmglg"] Jan 31 14:43:58 crc kubenswrapper[4751]: W0131 14:43:58.422253 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode14d9fb0_f377_4331_8bc1_8f4017bb95a3.slice/crio-b04fce866f961463553a14856d3791616fdd2474de989b38b991dcb6c8a6211a WatchSource:0}: Error finding container b04fce866f961463553a14856d3791616fdd2474de989b38b991dcb6c8a6211a: Status 404 returned error can't find the container with id b04fce866f961463553a14856d3791616fdd2474de989b38b991dcb6c8a6211a Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.447723 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:43:58 crc kubenswrapper[4751]: E0131 14:43:58.447956 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:43:58.947912401 +0000 UTC m=+143.322625276 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.448015 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.448039 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtk42\" (UniqueName: \"kubernetes.io/projected/89314349-bbc8-4886-b93b-51358e4e71b0-kube-api-access-mtk42\") pod \"apiserver-76f77b778f-5f7jc\" (UID: \"89314349-bbc8-4886-b93b-51358e4e71b0\") " pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.448059 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/89a244ab-c405-48aa-893f-f50995384ede-webhook-cert\") pod \"packageserver-d55dfcdfc-7hjp9\" (UID: \"89a244ab-c405-48aa-893f-f50995384ede\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hjp9" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.448092 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4e18e163-6cf0-48ef-9a6f-90cbece870b0-registry-tls\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.448107 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5caeb3dc-2a42-41b5-ac91-c1c8a216fb43-oauth-serving-cert\") pod \"console-f9d7485db-h262z\" (UID: \"5caeb3dc-2a42-41b5-ac91-c1c8a216fb43\") " pod="openshift-console/console-f9d7485db-h262z" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.448123 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f47a4e08-e21f-4a13-9ea2-bc1545a64cae-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-l76jv\" (UID: \"f47a4e08-e21f-4a13-9ea2-bc1545a64cae\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l76jv" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.448140 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f6lk6\" (UniqueName: \"kubernetes.io/projected/89a244ab-c405-48aa-893f-f50995384ede-kube-api-access-f6lk6\") pod \"packageserver-d55dfcdfc-7hjp9\" (UID: \"89a244ab-c405-48aa-893f-f50995384ede\") " 
pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hjp9" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.448167 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f6be9bbf-6799-45e0-8d53-790a5484f3a4-metrics-tls\") pod \"dns-default-skzbg\" (UID: \"f6be9bbf-6799-45e0-8d53-790a5484f3a4\") " pod="openshift-dns/dns-default-skzbg" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.448191 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/89314349-bbc8-4886-b93b-51358e4e71b0-node-pullsecrets\") pod \"apiserver-76f77b778f-5f7jc\" (UID: \"89314349-bbc8-4886-b93b-51358e4e71b0\") " pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.448207 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2z67k\" (UniqueName: \"kubernetes.io/projected/e999b5a4-2e54-4195-98fa-4c5fa36f1b3a-kube-api-access-2z67k\") pod \"ingress-canary-qdsgb\" (UID: \"e999b5a4-2e54-4195-98fa-4c5fa36f1b3a\") " pod="openshift-ingress-canary/ingress-canary-qdsgb" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.448230 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ksqdw\" (UniqueName: \"kubernetes.io/projected/84e2930a-5ae3-4171-a3dd-e5eea62ef157-kube-api-access-ksqdw\") pod \"route-controller-manager-6576b87f9c-7762w\" (UID: \"84e2930a-5ae3-4171-a3dd-e5eea62ef157\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7762w" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.448245 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0058e7f4-92db-444d-a979-2880c3f83442-auth-proxy-config\") pod 
\"machine-approver-56656f9798-cc6m2\" (UID: \"0058e7f4-92db-444d-a979-2880c3f83442\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cc6m2" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.448260 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d031fa1b-4d52-47d7-8c39-5fa21fb6c244-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xv2tk\" (UID: \"d031fa1b-4d52-47d7-8c39-5fa21fb6c244\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xv2tk" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.448286 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llx87\" (UniqueName: \"kubernetes.io/projected/4e18e163-6cf0-48ef-9a6f-90cbece870b0-kube-api-access-llx87\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.448301 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/01ff1674-4e01-4cdc-aea3-1e91a6a389e3-metrics-certs\") pod \"router-default-5444994796-5hn9b\" (UID: \"01ff1674-4e01-4cdc-aea3-1e91a6a389e3\") " pod="openshift-ingress/router-default-5444994796-5hn9b" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.448317 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a55fc688-004a-4d6f-a48e-c10b0ae1d8f1-registration-dir\") pod \"csi-hostpathplugin-x4rnh\" (UID: \"a55fc688-004a-4d6f-a48e-c10b0ae1d8f1\") " pod="hostpath-provisioner/csi-hostpathplugin-x4rnh" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.448321 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/89314349-bbc8-4886-b93b-51358e4e71b0-node-pullsecrets\") pod \"apiserver-76f77b778f-5f7jc\" (UID: \"89314349-bbc8-4886-b93b-51358e4e71b0\") " pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.448333 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pj5gk\" (UniqueName: \"kubernetes.io/projected/a55fc688-004a-4d6f-a48e-c10b0ae1d8f1-kube-api-access-pj5gk\") pod \"csi-hostpathplugin-x4rnh\" (UID: \"a55fc688-004a-4d6f-a48e-c10b0ae1d8f1\") " pod="hostpath-provisioner/csi-hostpathplugin-x4rnh" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.448386 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a55fc688-004a-4d6f-a48e-c10b0ae1d8f1-plugins-dir\") pod \"csi-hostpathplugin-x4rnh\" (UID: \"a55fc688-004a-4d6f-a48e-c10b0ae1d8f1\") " pod="hostpath-provisioner/csi-hostpathplugin-x4rnh" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.448403 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4td9\" (UniqueName: \"kubernetes.io/projected/2ad3db81-4cb9-49a5-b4e0-55b546996fa0-kube-api-access-r4td9\") pod \"kube-storage-version-migrator-operator-b67b599dd-xqgfv\" (UID: \"2ad3db81-4cb9-49a5-b4e0-55b546996fa0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xqgfv" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.448423 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5caeb3dc-2a42-41b5-ac91-c1c8a216fb43-console-serving-cert\") pod \"console-f9d7485db-h262z\" (UID: \"5caeb3dc-2a42-41b5-ac91-c1c8a216fb43\") " pod="openshift-console/console-f9d7485db-h262z" Jan 
31 14:43:58 crc kubenswrapper[4751]: E0131 14:43:58.448433 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:43:58.948424684 +0000 UTC m=+143.323137569 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.448545 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0058e7f4-92db-444d-a979-2880c3f83442-config\") pod \"machine-approver-56656f9798-cc6m2\" (UID: \"0058e7f4-92db-444d-a979-2880c3f83442\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cc6m2" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.448572 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/89314349-bbc8-4886-b93b-51358e4e71b0-encryption-config\") pod \"apiserver-76f77b778f-5f7jc\" (UID: \"89314349-bbc8-4886-b93b-51358e4e71b0\") " pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.448620 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d031fa1b-4d52-47d7-8c39-5fa21fb6c244-proxy-tls\") pod \"machine-config-controller-84d6567774-xv2tk\" (UID: \"d031fa1b-4d52-47d7-8c39-5fa21fb6c244\") " 
pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xv2tk" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.448639 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac1cf81b-8ec8-4ae4-bfb3-d46bb75f24d4-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-s7gwp\" (UID: \"ac1cf81b-8ec8-4ae4-bfb3-d46bb75f24d4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s7gwp" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.448691 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z4lkn\" (UniqueName: \"kubernetes.io/projected/7014a649-2d58-4772-9eb3-697e4b925923-kube-api-access-z4lkn\") pod \"package-server-manager-789f6589d5-kvfvk\" (UID: \"7014a649-2d58-4772-9eb3-697e4b925923\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvfvk" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.448719 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5c65\" (UniqueName: \"kubernetes.io/projected/b17c8e83-275b-4777-946a-c7360ad8fa48-kube-api-access-r5c65\") pod \"migrator-59844c95c7-b44gm\" (UID: \"b17c8e83-275b-4777-946a-c7360ad8fa48\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b44gm" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.448771 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84e2930a-5ae3-4171-a3dd-e5eea62ef157-serving-cert\") pod \"route-controller-manager-6576b87f9c-7762w\" (UID: \"84e2930a-5ae3-4171-a3dd-e5eea62ef157\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7762w" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.448789 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cc8ec6f8-52f3-4bb8-a00b-4f73276a3af4-signing-key\") pod \"service-ca-9c57cc56f-vbfvz\" (UID: \"cc8ec6f8-52f3-4bb8-a00b-4f73276a3af4\") " pod="openshift-service-ca/service-ca-9c57cc56f-vbfvz" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.448846 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7014a649-2d58-4772-9eb3-697e4b925923-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kvfvk\" (UID: \"7014a649-2d58-4772-9eb3-697e4b925923\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvfvk" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.449178 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca236cfc-51d0-4d79-b90c-ddac400b4dbb-config\") pod \"service-ca-operator-777779d784-ghblb\" (UID: \"ca236cfc-51d0-4d79-b90c-ddac400b4dbb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ghblb" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.449227 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4e18e163-6cf0-48ef-9a6f-90cbece870b0-installation-pull-secrets\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.449247 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4e18e163-6cf0-48ef-9a6f-90cbece870b0-bound-sa-token\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: 
\"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.449279 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5caeb3dc-2a42-41b5-ac91-c1c8a216fb43-trusted-ca-bundle\") pod \"console-f9d7485db-h262z\" (UID: \"5caeb3dc-2a42-41b5-ac91-c1c8a216fb43\") " pod="openshift-console/console-f9d7485db-h262z" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.449317 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/89314349-bbc8-4886-b93b-51358e4e71b0-etcd-client\") pod \"apiserver-76f77b778f-5f7jc\" (UID: \"89314349-bbc8-4886-b93b-51358e4e71b0\") " pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.449336 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5c630253-f658-44fb-891d-f560f1e2b577-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-h4drr\" (UID: \"5c630253-f658-44fb-891d-f560f1e2b577\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h4drr" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.449361 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4e18e163-6cf0-48ef-9a6f-90cbece870b0-trusted-ca\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.449401 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/9edad05e-bd87-4a20-a947-6b09f9f7c93a-srv-cert\") pod \"catalog-operator-68c6474976-vc9q2\" (UID: \"9edad05e-bd87-4a20-a947-6b09f9f7c93a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vc9q2" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.449418 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9edad05e-bd87-4a20-a947-6b09f9f7c93a-profile-collector-cert\") pod \"catalog-operator-68c6474976-vc9q2\" (UID: \"9edad05e-bd87-4a20-a947-6b09f9f7c93a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vc9q2" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.449436 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zwjbh\" (UniqueName: \"kubernetes.io/projected/eade01dc-846b-42a8-a6ed-8cf0a0663e82-kube-api-access-zwjbh\") pod \"collect-profiles-29497830-rpwmp\" (UID: \"eade01dc-846b-42a8-a6ed-8cf0a0663e82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497830-rpwmp" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.449487 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwdnj\" (UniqueName: \"kubernetes.io/projected/ac1cf81b-8ec8-4ae4-bfb3-d46bb75f24d4-kube-api-access-vwdnj\") pod \"openshift-controller-manager-operator-756b6f6bc6-s7gwp\" (UID: \"ac1cf81b-8ec8-4ae4-bfb3-d46bb75f24d4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s7gwp" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.449504 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/dc13f997-316e-4e81-a56e-0fa6e02d1502-certs\") pod \"machine-config-server-x4njd\" (UID: \"dc13f997-316e-4e81-a56e-0fa6e02d1502\") " 
pod="openshift-machine-config-operator/machine-config-server-x4njd" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.449518 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/89a244ab-c405-48aa-893f-f50995384ede-tmpfs\") pod \"packageserver-d55dfcdfc-7hjp9\" (UID: \"89a244ab-c405-48aa-893f-f50995384ede\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hjp9" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.449536 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9f99779e-5e77-4b5c-8886-7accebe8a897-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7hc86\" (UID: \"9f99779e-5e77-4b5c-8886-7accebe8a897\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7hc86" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.451342 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5caeb3dc-2a42-41b5-ac91-c1c8a216fb43-trusted-ca-bundle\") pod \"console-f9d7485db-h262z\" (UID: \"5caeb3dc-2a42-41b5-ac91-c1c8a216fb43\") " pod="openshift-console/console-f9d7485db-h262z" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.451340 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/89314349-bbc8-4886-b93b-51358e4e71b0-audit\") pod \"apiserver-76f77b778f-5f7jc\" (UID: \"89314349-bbc8-4886-b93b-51358e4e71b0\") " pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.451440 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eade01dc-846b-42a8-a6ed-8cf0a0663e82-config-volume\") pod 
\"collect-profiles-29497830-rpwmp\" (UID: \"eade01dc-846b-42a8-a6ed-8cf0a0663e82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497830-rpwmp" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.451558 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/89a244ab-c405-48aa-893f-f50995384ede-apiservice-cert\") pod \"packageserver-d55dfcdfc-7hjp9\" (UID: \"89a244ab-c405-48aa-893f-f50995384ede\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hjp9" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.451575 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x8tl6\" (UniqueName: \"kubernetes.io/projected/cc8ec6f8-52f3-4bb8-a00b-4f73276a3af4-kube-api-access-x8tl6\") pod \"service-ca-9c57cc56f-vbfvz\" (UID: \"cc8ec6f8-52f3-4bb8-a00b-4f73276a3af4\") " pod="openshift-service-ca/service-ca-9c57cc56f-vbfvz" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.451600 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4e18e163-6cf0-48ef-9a6f-90cbece870b0-registry-certificates\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.451617 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/89314349-bbc8-4886-b93b-51358e4e71b0-audit-dir\") pod \"apiserver-76f77b778f-5f7jc\" (UID: \"89314349-bbc8-4886-b93b-51358e4e71b0\") " pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.451634 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/17d7ae01-24ad-448d-ae7c-10df353833f4-config\") pod \"kube-apiserver-operator-766d6c64bb-cp47m\" (UID: \"17d7ae01-24ad-448d-ae7c-10df353833f4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cp47m" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.451662 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d64a8f76-87cc-45eb-bc92-5802a3db6c3d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-v4px6\" (UID: \"d64a8f76-87cc-45eb-bc92-5802a3db6c3d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v4px6" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.451679 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9f99779e-5e77-4b5c-8886-7accebe8a897-srv-cert\") pod \"olm-operator-6b444d44fb-7hc86\" (UID: \"9f99779e-5e77-4b5c-8886-7accebe8a897\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7hc86" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.451700 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdrlx\" (UniqueName: \"kubernetes.io/projected/8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea-kube-api-access-qdrlx\") pod \"marketplace-operator-79b997595-5r6kv\" (UID: \"8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea\") " pod="openshift-marketplace/marketplace-operator-79b997595-5r6kv" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.451719 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/89314349-bbc8-4886-b93b-51358e4e71b0-etcd-serving-ca\") pod \"apiserver-76f77b778f-5f7jc\" (UID: \"89314349-bbc8-4886-b93b-51358e4e71b0\") " 
pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.451734 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca236cfc-51d0-4d79-b90c-ddac400b4dbb-serving-cert\") pod \"service-ca-operator-777779d784-ghblb\" (UID: \"ca236cfc-51d0-4d79-b90c-ddac400b4dbb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ghblb" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.451751 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/01ff1674-4e01-4cdc-aea3-1e91a6a389e3-service-ca-bundle\") pod \"router-default-5444994796-5hn9b\" (UID: \"01ff1674-4e01-4cdc-aea3-1e91a6a389e3\") " pod="openshift-ingress/router-default-5444994796-5hn9b" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.451767 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c4b193a-a01b-440a-a94a-55c4b5f06586-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nz99n\" (UID: \"4c4b193a-a01b-440a-a94a-55c4b5f06586\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nz99n" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.451839 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/0058e7f4-92db-444d-a979-2880c3f83442-config\") pod \"machine-approver-56656f9798-cc6m2\" (UID: \"0058e7f4-92db-444d-a979-2880c3f83442\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cc6m2" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.451877 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/f6be9bbf-6799-45e0-8d53-790a5484f3a4-config-volume\") pod \"dns-default-skzbg\" (UID: \"f6be9bbf-6799-45e0-8d53-790a5484f3a4\") " pod="openshift-dns/dns-default-skzbg" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.451935 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cc8ec6f8-52f3-4bb8-a00b-4f73276a3af4-signing-cabundle\") pod \"service-ca-9c57cc56f-vbfvz\" (UID: \"cc8ec6f8-52f3-4bb8-a00b-4f73276a3af4\") " pod="openshift-service-ca/service-ca-9c57cc56f-vbfvz" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.451953 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m898f\" (UniqueName: \"kubernetes.io/projected/dc13f997-316e-4e81-a56e-0fa6e02d1502-kube-api-access-m898f\") pod \"machine-config-server-x4njd\" (UID: \"dc13f997-316e-4e81-a56e-0fa6e02d1502\") " pod="openshift-machine-config-operator/machine-config-server-x4njd" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.452123 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89314349-bbc8-4886-b93b-51358e4e71b0-config\") pod \"apiserver-76f77b778f-5f7jc\" (UID: \"89314349-bbc8-4886-b93b-51358e4e71b0\") " pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.452176 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac1cf81b-8ec8-4ae4-bfb3-d46bb75f24d4-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-s7gwp\" (UID: \"ac1cf81b-8ec8-4ae4-bfb3-d46bb75f24d4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s7gwp" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.452200 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f47a4e08-e21f-4a13-9ea2-bc1545a64cae-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-l76jv\" (UID: \"f47a4e08-e21f-4a13-9ea2-bc1545a64cae\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l76jv" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.452283 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d64a8f76-87cc-45eb-bc92-5802a3db6c3d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-v4px6\" (UID: \"d64a8f76-87cc-45eb-bc92-5802a3db6c3d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v4px6" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.452307 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a55fc688-004a-4d6f-a48e-c10b0ae1d8f1-socket-dir\") pod \"csi-hostpathplugin-x4rnh\" (UID: \"a55fc688-004a-4d6f-a48e-c10b0ae1d8f1\") " pod="hostpath-provisioner/csi-hostpathplugin-x4rnh" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.452365 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jw467\" (UniqueName: \"kubernetes.io/projected/9edad05e-bd87-4a20-a947-6b09f9f7c93a-kube-api-access-jw467\") pod \"catalog-operator-68c6474976-vc9q2\" (UID: \"9edad05e-bd87-4a20-a947-6b09f9f7c93a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vc9q2" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.452391 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5caeb3dc-2a42-41b5-ac91-c1c8a216fb43-service-ca\") pod \"console-f9d7485db-h262z\" (UID: 
\"5caeb3dc-2a42-41b5-ac91-c1c8a216fb43\") " pod="openshift-console/console-f9d7485db-h262z" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.452453 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7sjfc\" (UniqueName: \"kubernetes.io/projected/853ca050-beae-4089-a5df-9556eeda508b-kube-api-access-7sjfc\") pod \"cluster-samples-operator-665b6dd947-8m7f4\" (UID: \"853ca050-beae-4089-a5df-9556eeda508b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8m7f4" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.452471 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c4b193a-a01b-440a-a94a-55c4b5f06586-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nz99n\" (UID: \"4c4b193a-a01b-440a-a94a-55c4b5f06586\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nz99n" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.452537 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/853ca050-beae-4089-a5df-9556eeda508b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-8m7f4\" (UID: \"853ca050-beae-4089-a5df-9556eeda508b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8m7f4" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.452621 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/01ff1674-4e01-4cdc-aea3-1e91a6a389e3-stats-auth\") pod \"router-default-5444994796-5hn9b\" (UID: \"01ff1674-4e01-4cdc-aea3-1e91a6a389e3\") " pod="openshift-ingress/router-default-5444994796-5hn9b" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.452641 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/5caeb3dc-2a42-41b5-ac91-c1c8a216fb43-console-oauth-config\") pod \"console-f9d7485db-h262z\" (UID: \"5caeb3dc-2a42-41b5-ac91-c1c8a216fb43\") " pod="openshift-console/console-f9d7485db-h262z" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.452683 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84e2930a-5ae3-4171-a3dd-e5eea62ef157-config\") pod \"route-controller-manager-6576b87f9c-7762w\" (UID: \"84e2930a-5ae3-4171-a3dd-e5eea62ef157\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7762w" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.452845 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-4m7jl"] Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.452869 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n8662\" (UniqueName: \"kubernetes.io/projected/01ff1674-4e01-4cdc-aea3-1e91a6a389e3-kube-api-access-n8662\") pod \"router-default-5444994796-5hn9b\" (UID: \"01ff1674-4e01-4cdc-aea3-1e91a6a389e3\") " pod="openshift-ingress/router-default-5444994796-5hn9b" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.452889 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e999b5a4-2e54-4195-98fa-4c5fa36f1b3a-cert\") pod \"ingress-canary-qdsgb\" (UID: \"e999b5a4-2e54-4195-98fa-4c5fa36f1b3a\") " pod="openshift-ingress-canary/ingress-canary-qdsgb" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.452945 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ad3db81-4cb9-49a5-b4e0-55b546996fa0-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xqgfv\" (UID: \"2ad3db81-4cb9-49a5-b4e0-55b546996fa0\") " 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xqgfv" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.452962 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17d7ae01-24ad-448d-ae7c-10df353833f4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-cp47m\" (UID: \"17d7ae01-24ad-448d-ae7c-10df353833f4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cp47m" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.452990 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/01ff1674-4e01-4cdc-aea3-1e91a6a389e3-default-certificate\") pod \"router-default-5444994796-5hn9b\" (UID: \"01ff1674-4e01-4cdc-aea3-1e91a6a389e3\") " pod="openshift-ingress/router-default-5444994796-5hn9b" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.453007 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/89314349-bbc8-4886-b93b-51358e4e71b0-image-import-ca\") pod \"apiserver-76f77b778f-5f7jc\" (UID: \"89314349-bbc8-4886-b93b-51358e4e71b0\") " pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.453024 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9wswl\" (UniqueName: \"kubernetes.io/projected/4c4b193a-a01b-440a-a94a-55c4b5f06586-kube-api-access-9wswl\") pod \"openshift-apiserver-operator-796bbdcf4f-nz99n\" (UID: \"4c4b193a-a01b-440a-a94a-55c4b5f06586\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nz99n" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.453044 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/89314349-bbc8-4886-b93b-51358e4e71b0-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5f7jc\" (UID: \"89314349-bbc8-4886-b93b-51358e4e71b0\") " pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.453062 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ngnfg\" (UniqueName: \"kubernetes.io/projected/ca236cfc-51d0-4d79-b90c-ddac400b4dbb-kube-api-access-ngnfg\") pod \"service-ca-operator-777779d784-ghblb\" (UID: \"ca236cfc-51d0-4d79-b90c-ddac400b4dbb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ghblb" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.453089 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d64a8f76-87cc-45eb-bc92-5802a3db6c3d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-v4px6\" (UID: \"d64a8f76-87cc-45eb-bc92-5802a3db6c3d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v4px6" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.453107 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gxcmz\" (UniqueName: \"kubernetes.io/projected/5c630253-f658-44fb-891d-f560f1e2b577-kube-api-access-gxcmz\") pod \"control-plane-machine-set-operator-78cbb6b69f-h4drr\" (UID: \"5c630253-f658-44fb-891d-f560f1e2b577\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h4drr" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.453151 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lvlhx\" (UniqueName: \"kubernetes.io/projected/5caeb3dc-2a42-41b5-ac91-c1c8a216fb43-kube-api-access-lvlhx\") pod \"console-f9d7485db-h262z\" (UID: \"5caeb3dc-2a42-41b5-ac91-c1c8a216fb43\") " 
pod="openshift-console/console-f9d7485db-h262z" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.453167 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17d7ae01-24ad-448d-ae7c-10df353833f4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-cp47m\" (UID: \"17d7ae01-24ad-448d-ae7c-10df353833f4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cp47m" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.453194 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4e18e163-6cf0-48ef-9a6f-90cbece870b0-ca-trust-extracted\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.453209 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5caeb3dc-2a42-41b5-ac91-c1c8a216fb43-console-config\") pod \"console-f9d7485db-h262z\" (UID: \"5caeb3dc-2a42-41b5-ac91-c1c8a216fb43\") " pod="openshift-console/console-f9d7485db-h262z" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.453226 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5r6kv\" (UID: \"8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea\") " pod="openshift-marketplace/marketplace-operator-79b997595-5r6kv" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.453251 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: 
\"kubernetes.io/secret/0058e7f4-92db-444d-a979-2880c3f83442-machine-approver-tls\") pod \"machine-approver-56656f9798-cc6m2\" (UID: \"0058e7f4-92db-444d-a979-2880c3f83442\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cc6m2" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.453268 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eade01dc-846b-42a8-a6ed-8cf0a0663e82-secret-volume\") pod \"collect-profiles-29497830-rpwmp\" (UID: \"eade01dc-846b-42a8-a6ed-8cf0a0663e82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497830-rpwmp" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.453303 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a55fc688-004a-4d6f-a48e-c10b0ae1d8f1-csi-data-dir\") pod \"csi-hostpathplugin-x4rnh\" (UID: \"a55fc688-004a-4d6f-a48e-c10b0ae1d8f1\") " pod="hostpath-provisioner/csi-hostpathplugin-x4rnh" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.453319 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ad3db81-4cb9-49a5-b4e0-55b546996fa0-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xqgfv\" (UID: \"2ad3db81-4cb9-49a5-b4e0-55b546996fa0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xqgfv" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.453356 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84e2930a-5ae3-4171-a3dd-e5eea62ef157-client-ca\") pod \"route-controller-manager-6576b87f9c-7762w\" (UID: \"84e2930a-5ae3-4171-a3dd-e5eea62ef157\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7762w" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.453372 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9th82\" (UniqueName: \"kubernetes.io/projected/d031fa1b-4d52-47d7-8c39-5fa21fb6c244-kube-api-access-9th82\") pod \"machine-config-controller-84d6567774-xv2tk\" (UID: \"d031fa1b-4d52-47d7-8c39-5fa21fb6c244\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xv2tk" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.453388 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f47a4e08-e21f-4a13-9ea2-bc1545a64cae-config\") pod \"kube-controller-manager-operator-78b949d7b-l76jv\" (UID: \"f47a4e08-e21f-4a13-9ea2-bc1545a64cae\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l76jv" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.453404 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5r6kv\" (UID: \"8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea\") " pod="openshift-marketplace/marketplace-operator-79b997595-5r6kv" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.453460 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89314349-bbc8-4886-b93b-51358e4e71b0-serving-cert\") pod \"apiserver-76f77b778f-5f7jc\" (UID: \"89314349-bbc8-4886-b93b-51358e4e71b0\") " pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.453476 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a55fc688-004a-4d6f-a48e-c10b0ae1d8f1-mountpoint-dir\") pod \"csi-hostpathplugin-x4rnh\" (UID: \"a55fc688-004a-4d6f-a48e-c10b0ae1d8f1\") " pod="hostpath-provisioner/csi-hostpathplugin-x4rnh" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.453522 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-grhf9\" (UniqueName: \"kubernetes.io/projected/9f99779e-5e77-4b5c-8886-7accebe8a897-kube-api-access-grhf9\") pod \"olm-operator-6b444d44fb-7hc86\" (UID: \"9f99779e-5e77-4b5c-8886-7accebe8a897\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7hc86" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.453542 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ppx8c\" (UniqueName: \"kubernetes.io/projected/0058e7f4-92db-444d-a979-2880c3f83442-kube-api-access-ppx8c\") pod \"machine-approver-56656f9798-cc6m2\" (UID: \"0058e7f4-92db-444d-a979-2880c3f83442\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cc6m2" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.453559 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btktd\" (UniqueName: \"kubernetes.io/projected/f6be9bbf-6799-45e0-8d53-790a5484f3a4-kube-api-access-btktd\") pod \"dns-default-skzbg\" (UID: \"f6be9bbf-6799-45e0-8d53-790a5484f3a4\") " pod="openshift-dns/dns-default-skzbg" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.453642 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhsxz\" (UniqueName: \"kubernetes.io/projected/d64a8f76-87cc-45eb-bc92-5802a3db6c3d-kube-api-access-jhsxz\") pod \"cluster-image-registry-operator-dc59b4c8b-v4px6\" (UID: \"d64a8f76-87cc-45eb-bc92-5802a3db6c3d\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v4px6" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.453728 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/dc13f997-316e-4e81-a56e-0fa6e02d1502-node-bootstrap-token\") pod \"machine-config-server-x4njd\" (UID: \"dc13f997-316e-4e81-a56e-0fa6e02d1502\") " pod="openshift-machine-config-operator/machine-config-server-x4njd" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.453857 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d031fa1b-4d52-47d7-8c39-5fa21fb6c244-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-xv2tk\" (UID: \"d031fa1b-4d52-47d7-8c39-5fa21fb6c244\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xv2tk" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.454161 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4e18e163-6cf0-48ef-9a6f-90cbece870b0-trusted-ca\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.449527 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/5caeb3dc-2a42-41b5-ac91-c1c8a216fb43-oauth-serving-cert\") pod \"console-f9d7485db-h262z\" (UID: \"5caeb3dc-2a42-41b5-ac91-c1c8a216fb43\") " pod="openshift-console/console-f9d7485db-h262z" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.455986 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/01ff1674-4e01-4cdc-aea3-1e91a6a389e3-service-ca-bundle\") pod \"router-default-5444994796-5hn9b\" (UID: \"01ff1674-4e01-4cdc-aea3-1e91a6a389e3\") " pod="openshift-ingress/router-default-5444994796-5hn9b" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.456680 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac1cf81b-8ec8-4ae4-bfb3-d46bb75f24d4-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-s7gwp\" (UID: \"ac1cf81b-8ec8-4ae4-bfb3-d46bb75f24d4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s7gwp" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.457006 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/89314349-bbc8-4886-b93b-51358e4e71b0-audit-dir\") pod \"apiserver-76f77b778f-5f7jc\" (UID: \"89314349-bbc8-4886-b93b-51358e4e71b0\") " pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.452362 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0058e7f4-92db-444d-a979-2880c3f83442-auth-proxy-config\") pod \"machine-approver-56656f9798-cc6m2\" (UID: \"0058e7f4-92db-444d-a979-2880c3f83442\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cc6m2" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.459863 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/5caeb3dc-2a42-41b5-ac91-c1c8a216fb43-service-ca\") pod \"console-f9d7485db-h262z\" (UID: \"5caeb3dc-2a42-41b5-ac91-c1c8a216fb43\") " pod="openshift-console/console-f9d7485db-h262z" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.460944 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4e18e163-6cf0-48ef-9a6f-90cbece870b0-registry-certificates\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.462021 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/5caeb3dc-2a42-41b5-ac91-c1c8a216fb43-console-config\") pod \"console-f9d7485db-h262z\" (UID: \"5caeb3dc-2a42-41b5-ac91-c1c8a216fb43\") " pod="openshift-console/console-f9d7485db-h262z" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.462537 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4e18e163-6cf0-48ef-9a6f-90cbece870b0-ca-trust-extracted\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.462753 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84e2930a-5ae3-4171-a3dd-e5eea62ef157-client-ca\") pod \"route-controller-manager-6576b87f9c-7762w\" (UID: \"84e2930a-5ae3-4171-a3dd-e5eea62ef157\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7762w" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.463932 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/89314349-bbc8-4886-b93b-51358e4e71b0-encryption-config\") pod \"apiserver-76f77b778f-5f7jc\" (UID: \"89314349-bbc8-4886-b93b-51358e4e71b0\") " pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.464024 4751 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac1cf81b-8ec8-4ae4-bfb3-d46bb75f24d4-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-s7gwp\" (UID: \"ac1cf81b-8ec8-4ae4-bfb3-d46bb75f24d4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s7gwp" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.464082 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/17d7ae01-24ad-448d-ae7c-10df353833f4-config\") pod \"kube-apiserver-operator-766d6c64bb-cp47m\" (UID: \"17d7ae01-24ad-448d-ae7c-10df353833f4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cp47m" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.465531 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/01ff1674-4e01-4cdc-aea3-1e91a6a389e3-default-certificate\") pod \"router-default-5444994796-5hn9b\" (UID: \"01ff1674-4e01-4cdc-aea3-1e91a6a389e3\") " pod="openshift-ingress/router-default-5444994796-5hn9b" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.465990 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d031fa1b-4d52-47d7-8c39-5fa21fb6c244-proxy-tls\") pod \"machine-config-controller-84d6567774-xv2tk\" (UID: \"d031fa1b-4d52-47d7-8c39-5fa21fb6c244\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xv2tk" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.466030 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/5caeb3dc-2a42-41b5-ac91-c1c8a216fb43-console-oauth-config\") pod \"console-f9d7485db-h262z\" (UID: \"5caeb3dc-2a42-41b5-ac91-c1c8a216fb43\") " pod="openshift-console/console-f9d7485db-h262z" Jan 31 14:43:58 crc 
kubenswrapper[4751]: I0131 14:43:58.466148 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/89314349-bbc8-4886-b93b-51358e4e71b0-serving-cert\") pod \"apiserver-76f77b778f-5f7jc\" (UID: \"89314349-bbc8-4886-b93b-51358e4e71b0\") " pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.466395 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4c4b193a-a01b-440a-a94a-55c4b5f06586-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-nz99n\" (UID: \"4c4b193a-a01b-440a-a94a-55c4b5f06586\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nz99n" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.466661 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/89314349-bbc8-4886-b93b-51358e4e71b0-audit\") pod \"apiserver-76f77b778f-5f7jc\" (UID: \"89314349-bbc8-4886-b93b-51358e4e71b0\") " pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.466859 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/89314349-bbc8-4886-b93b-51358e4e71b0-etcd-serving-ca\") pod \"apiserver-76f77b778f-5f7jc\" (UID: \"89314349-bbc8-4886-b93b-51358e4e71b0\") " pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.467535 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/5caeb3dc-2a42-41b5-ac91-c1c8a216fb43-console-serving-cert\") pod \"console-f9d7485db-h262z\" (UID: \"5caeb3dc-2a42-41b5-ac91-c1c8a216fb43\") " pod="openshift-console/console-f9d7485db-h262z" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.468330 
4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/89314349-bbc8-4886-b93b-51358e4e71b0-image-import-ca\") pod \"apiserver-76f77b778f-5f7jc\" (UID: \"89314349-bbc8-4886-b93b-51358e4e71b0\") " pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.468579 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84e2930a-5ae3-4171-a3dd-e5eea62ef157-config\") pod \"route-controller-manager-6576b87f9c-7762w\" (UID: \"84e2930a-5ae3-4171-a3dd-e5eea62ef157\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7762w" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.468635 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/89314349-bbc8-4886-b93b-51358e4e71b0-trusted-ca-bundle\") pod \"apiserver-76f77b778f-5f7jc\" (UID: \"89314349-bbc8-4886-b93b-51358e4e71b0\") " pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.470657 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4e18e163-6cf0-48ef-9a6f-90cbece870b0-installation-pull-secrets\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.470842 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4e18e163-6cf0-48ef-9a6f-90cbece870b0-registry-tls\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:43:58 crc kubenswrapper[4751]: 
I0131 14:43:58.471374 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/0058e7f4-92db-444d-a979-2880c3f83442-machine-approver-tls\") pod \"machine-approver-56656f9798-cc6m2\" (UID: \"0058e7f4-92db-444d-a979-2880c3f83442\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cc6m2" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.471632 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/01ff1674-4e01-4cdc-aea3-1e91a6a389e3-metrics-certs\") pod \"router-default-5444994796-5hn9b\" (UID: \"01ff1674-4e01-4cdc-aea3-1e91a6a389e3\") " pod="openshift-ingress/router-default-5444994796-5hn9b" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.472294 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/01ff1674-4e01-4cdc-aea3-1e91a6a389e3-stats-auth\") pod \"router-default-5444994796-5hn9b\" (UID: \"01ff1674-4e01-4cdc-aea3-1e91a6a389e3\") " pod="openshift-ingress/router-default-5444994796-5hn9b" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.472432 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/17d7ae01-24ad-448d-ae7c-10df353833f4-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-cp47m\" (UID: \"17d7ae01-24ad-448d-ae7c-10df353833f4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cp47m" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.473643 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84e2930a-5ae3-4171-a3dd-e5eea62ef157-serving-cert\") pod \"route-controller-manager-6576b87f9c-7762w\" (UID: \"84e2930a-5ae3-4171-a3dd-e5eea62ef157\") " 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7762w" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.477520 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/853ca050-beae-4089-a5df-9556eeda508b-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-8m7f4\" (UID: \"853ca050-beae-4089-a5df-9556eeda508b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8m7f4" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.486510 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-db5pg" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.486712 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4c4b193a-a01b-440a-a94a-55c4b5f06586-config\") pod \"openshift-apiserver-operator-796bbdcf4f-nz99n\" (UID: \"4c4b193a-a01b-440a-a94a-55c4b5f06586\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nz99n" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.489190 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/89314349-bbc8-4886-b93b-51358e4e71b0-config\") pod \"apiserver-76f77b778f-5f7jc\" (UID: \"89314349-bbc8-4886-b93b-51358e4e71b0\") " pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.490450 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/89314349-bbc8-4886-b93b-51358e4e71b0-etcd-client\") pod \"apiserver-76f77b778f-5f7jc\" (UID: \"89314349-bbc8-4886-b93b-51358e4e71b0\") " pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.507379 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-mtk42\" (UniqueName: \"kubernetes.io/projected/89314349-bbc8-4886-b93b-51358e4e71b0-kube-api-access-mtk42\") pod \"apiserver-76f77b778f-5f7jc\" (UID: \"89314349-bbc8-4886-b93b-51358e4e71b0\") " pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.508890 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-sxjf5"] Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.521832 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-hgs4c"] Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.534642 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4e18e163-6cf0-48ef-9a6f-90cbece870b0-bound-sa-token\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.534942 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.553208 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llx87\" (UniqueName: \"kubernetes.io/projected/4e18e163-6cf0-48ef-9a6f-90cbece870b0-kube-api-access-llx87\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555028 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555265 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ngnfg\" (UniqueName: \"kubernetes.io/projected/ca236cfc-51d0-4d79-b90c-ddac400b4dbb-kube-api-access-ngnfg\") pod \"service-ca-operator-777779d784-ghblb\" (UID: \"ca236cfc-51d0-4d79-b90c-ddac400b4dbb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ghblb" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555324 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d64a8f76-87cc-45eb-bc92-5802a3db6c3d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-v4px6\" (UID: \"d64a8f76-87cc-45eb-bc92-5802a3db6c3d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v4px6" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555343 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gxcmz\" (UniqueName: 
\"kubernetes.io/projected/5c630253-f658-44fb-891d-f560f1e2b577-kube-api-access-gxcmz\") pod \"control-plane-machine-set-operator-78cbb6b69f-h4drr\" (UID: \"5c630253-f658-44fb-891d-f560f1e2b577\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h4drr" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555374 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5r6kv\" (UID: \"8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea\") " pod="openshift-marketplace/marketplace-operator-79b997595-5r6kv" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555405 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eade01dc-846b-42a8-a6ed-8cf0a0663e82-secret-volume\") pod \"collect-profiles-29497830-rpwmp\" (UID: \"eade01dc-846b-42a8-a6ed-8cf0a0663e82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497830-rpwmp" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555420 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a55fc688-004a-4d6f-a48e-c10b0ae1d8f1-csi-data-dir\") pod \"csi-hostpathplugin-x4rnh\" (UID: \"a55fc688-004a-4d6f-a48e-c10b0ae1d8f1\") " pod="hostpath-provisioner/csi-hostpathplugin-x4rnh" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555441 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f47a4e08-e21f-4a13-9ea2-bc1545a64cae-config\") pod \"kube-controller-manager-operator-78b949d7b-l76jv\" (UID: \"f47a4e08-e21f-4a13-9ea2-bc1545a64cae\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l76jv" Jan 31 14:43:58 crc 
kubenswrapper[4751]: I0131 14:43:58.555455 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5r6kv\" (UID: \"8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea\") " pod="openshift-marketplace/marketplace-operator-79b997595-5r6kv" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555472 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ad3db81-4cb9-49a5-b4e0-55b546996fa0-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xqgfv\" (UID: \"2ad3db81-4cb9-49a5-b4e0-55b546996fa0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xqgfv" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555496 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a55fc688-004a-4d6f-a48e-c10b0ae1d8f1-mountpoint-dir\") pod \"csi-hostpathplugin-x4rnh\" (UID: \"a55fc688-004a-4d6f-a48e-c10b0ae1d8f1\") " pod="hostpath-provisioner/csi-hostpathplugin-x4rnh" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555510 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-grhf9\" (UniqueName: \"kubernetes.io/projected/9f99779e-5e77-4b5c-8886-7accebe8a897-kube-api-access-grhf9\") pod \"olm-operator-6b444d44fb-7hc86\" (UID: \"9f99779e-5e77-4b5c-8886-7accebe8a897\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7hc86" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555531 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btktd\" (UniqueName: \"kubernetes.io/projected/f6be9bbf-6799-45e0-8d53-790a5484f3a4-kube-api-access-btktd\") pod 
\"dns-default-skzbg\" (UID: \"f6be9bbf-6799-45e0-8d53-790a5484f3a4\") " pod="openshift-dns/dns-default-skzbg" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555547 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhsxz\" (UniqueName: \"kubernetes.io/projected/d64a8f76-87cc-45eb-bc92-5802a3db6c3d-kube-api-access-jhsxz\") pod \"cluster-image-registry-operator-dc59b4c8b-v4px6\" (UID: \"d64a8f76-87cc-45eb-bc92-5802a3db6c3d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v4px6" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555565 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/dc13f997-316e-4e81-a56e-0fa6e02d1502-node-bootstrap-token\") pod \"machine-config-server-x4njd\" (UID: \"dc13f997-316e-4e81-a56e-0fa6e02d1502\") " pod="openshift-machine-config-operator/machine-config-server-x4njd" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555587 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/89a244ab-c405-48aa-893f-f50995384ede-webhook-cert\") pod \"packageserver-d55dfcdfc-7hjp9\" (UID: \"89a244ab-c405-48aa-893f-f50995384ede\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hjp9" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555605 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f47a4e08-e21f-4a13-9ea2-bc1545a64cae-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-l76jv\" (UID: \"f47a4e08-e21f-4a13-9ea2-bc1545a64cae\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l76jv" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555621 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-f6lk6\" (UniqueName: \"kubernetes.io/projected/89a244ab-c405-48aa-893f-f50995384ede-kube-api-access-f6lk6\") pod \"packageserver-d55dfcdfc-7hjp9\" (UID: \"89a244ab-c405-48aa-893f-f50995384ede\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hjp9" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555638 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f6be9bbf-6799-45e0-8d53-790a5484f3a4-metrics-tls\") pod \"dns-default-skzbg\" (UID: \"f6be9bbf-6799-45e0-8d53-790a5484f3a4\") " pod="openshift-dns/dns-default-skzbg" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555655 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2z67k\" (UniqueName: \"kubernetes.io/projected/e999b5a4-2e54-4195-98fa-4c5fa36f1b3a-kube-api-access-2z67k\") pod \"ingress-canary-qdsgb\" (UID: \"e999b5a4-2e54-4195-98fa-4c5fa36f1b3a\") " pod="openshift-ingress-canary/ingress-canary-qdsgb" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555686 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a55fc688-004a-4d6f-a48e-c10b0ae1d8f1-registration-dir\") pod \"csi-hostpathplugin-x4rnh\" (UID: \"a55fc688-004a-4d6f-a48e-c10b0ae1d8f1\") " pod="hostpath-provisioner/csi-hostpathplugin-x4rnh" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555700 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pj5gk\" (UniqueName: \"kubernetes.io/projected/a55fc688-004a-4d6f-a48e-c10b0ae1d8f1-kube-api-access-pj5gk\") pod \"csi-hostpathplugin-x4rnh\" (UID: \"a55fc688-004a-4d6f-a48e-c10b0ae1d8f1\") " pod="hostpath-provisioner/csi-hostpathplugin-x4rnh" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555715 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a55fc688-004a-4d6f-a48e-c10b0ae1d8f1-plugins-dir\") pod \"csi-hostpathplugin-x4rnh\" (UID: \"a55fc688-004a-4d6f-a48e-c10b0ae1d8f1\") " pod="hostpath-provisioner/csi-hostpathplugin-x4rnh" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555729 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r4td9\" (UniqueName: \"kubernetes.io/projected/2ad3db81-4cb9-49a5-b4e0-55b546996fa0-kube-api-access-r4td9\") pod \"kube-storage-version-migrator-operator-b67b599dd-xqgfv\" (UID: \"2ad3db81-4cb9-49a5-b4e0-55b546996fa0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xqgfv" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555771 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z4lkn\" (UniqueName: \"kubernetes.io/projected/7014a649-2d58-4772-9eb3-697e4b925923-kube-api-access-z4lkn\") pod \"package-server-manager-789f6589d5-kvfvk\" (UID: \"7014a649-2d58-4772-9eb3-697e4b925923\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvfvk" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555786 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5c65\" (UniqueName: \"kubernetes.io/projected/b17c8e83-275b-4777-946a-c7360ad8fa48-kube-api-access-r5c65\") pod \"migrator-59844c95c7-b44gm\" (UID: \"b17c8e83-275b-4777-946a-c7360ad8fa48\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b44gm" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555804 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7014a649-2d58-4772-9eb3-697e4b925923-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kvfvk\" (UID: 
\"7014a649-2d58-4772-9eb3-697e4b925923\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvfvk" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555818 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca236cfc-51d0-4d79-b90c-ddac400b4dbb-config\") pod \"service-ca-operator-777779d784-ghblb\" (UID: \"ca236cfc-51d0-4d79-b90c-ddac400b4dbb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ghblb" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555832 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cc8ec6f8-52f3-4bb8-a00b-4f73276a3af4-signing-key\") pod \"service-ca-9c57cc56f-vbfvz\" (UID: \"cc8ec6f8-52f3-4bb8-a00b-4f73276a3af4\") " pod="openshift-service-ca/service-ca-9c57cc56f-vbfvz" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555849 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5c630253-f658-44fb-891d-f560f1e2b577-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-h4drr\" (UID: \"5c630253-f658-44fb-891d-f560f1e2b577\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h4drr" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555865 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9edad05e-bd87-4a20-a947-6b09f9f7c93a-srv-cert\") pod \"catalog-operator-68c6474976-vc9q2\" (UID: \"9edad05e-bd87-4a20-a947-6b09f9f7c93a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vc9q2" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555882 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" 
(UniqueName: \"kubernetes.io/secret/9edad05e-bd87-4a20-a947-6b09f9f7c93a-profile-collector-cert\") pod \"catalog-operator-68c6474976-vc9q2\" (UID: \"9edad05e-bd87-4a20-a947-6b09f9f7c93a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vc9q2" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555896 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zwjbh\" (UniqueName: \"kubernetes.io/projected/eade01dc-846b-42a8-a6ed-8cf0a0663e82-kube-api-access-zwjbh\") pod \"collect-profiles-29497830-rpwmp\" (UID: \"eade01dc-846b-42a8-a6ed-8cf0a0663e82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497830-rpwmp" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555917 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/dc13f997-316e-4e81-a56e-0fa6e02d1502-certs\") pod \"machine-config-server-x4njd\" (UID: \"dc13f997-316e-4e81-a56e-0fa6e02d1502\") " pod="openshift-machine-config-operator/machine-config-server-x4njd" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555937 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/89a244ab-c405-48aa-893f-f50995384ede-tmpfs\") pod \"packageserver-d55dfcdfc-7hjp9\" (UID: \"89a244ab-c405-48aa-893f-f50995384ede\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hjp9" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555951 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9f99779e-5e77-4b5c-8886-7accebe8a897-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7hc86\" (UID: \"9f99779e-5e77-4b5c-8886-7accebe8a897\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7hc86" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555968 4751 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eade01dc-846b-42a8-a6ed-8cf0a0663e82-config-volume\") pod \"collect-profiles-29497830-rpwmp\" (UID: \"eade01dc-846b-42a8-a6ed-8cf0a0663e82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497830-rpwmp" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.555986 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/89a244ab-c405-48aa-893f-f50995384ede-apiservice-cert\") pod \"packageserver-d55dfcdfc-7hjp9\" (UID: \"89a244ab-c405-48aa-893f-f50995384ede\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hjp9" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.556000 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x8tl6\" (UniqueName: \"kubernetes.io/projected/cc8ec6f8-52f3-4bb8-a00b-4f73276a3af4-kube-api-access-x8tl6\") pod \"service-ca-9c57cc56f-vbfvz\" (UID: \"cc8ec6f8-52f3-4bb8-a00b-4f73276a3af4\") " pod="openshift-service-ca/service-ca-9c57cc56f-vbfvz" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.556016 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d64a8f76-87cc-45eb-bc92-5802a3db6c3d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-v4px6\" (UID: \"d64a8f76-87cc-45eb-bc92-5802a3db6c3d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v4px6" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.556032 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9f99779e-5e77-4b5c-8886-7accebe8a897-srv-cert\") pod \"olm-operator-6b444d44fb-7hc86\" (UID: \"9f99779e-5e77-4b5c-8886-7accebe8a897\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7hc86" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.556049 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdrlx\" (UniqueName: \"kubernetes.io/projected/8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea-kube-api-access-qdrlx\") pod \"marketplace-operator-79b997595-5r6kv\" (UID: \"8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea\") " pod="openshift-marketplace/marketplace-operator-79b997595-5r6kv" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.556063 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca236cfc-51d0-4d79-b90c-ddac400b4dbb-serving-cert\") pod \"service-ca-operator-777779d784-ghblb\" (UID: \"ca236cfc-51d0-4d79-b90c-ddac400b4dbb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ghblb" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.556093 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6be9bbf-6799-45e0-8d53-790a5484f3a4-config-volume\") pod \"dns-default-skzbg\" (UID: \"f6be9bbf-6799-45e0-8d53-790a5484f3a4\") " pod="openshift-dns/dns-default-skzbg" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.556109 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f47a4e08-e21f-4a13-9ea2-bc1545a64cae-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-l76jv\" (UID: \"f47a4e08-e21f-4a13-9ea2-bc1545a64cae\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l76jv" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.556124 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/d64a8f76-87cc-45eb-bc92-5802a3db6c3d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-v4px6\" (UID: \"d64a8f76-87cc-45eb-bc92-5802a3db6c3d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v4px6" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.556137 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a55fc688-004a-4d6f-a48e-c10b0ae1d8f1-socket-dir\") pod \"csi-hostpathplugin-x4rnh\" (UID: \"a55fc688-004a-4d6f-a48e-c10b0ae1d8f1\") " pod="hostpath-provisioner/csi-hostpathplugin-x4rnh" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.556163 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cc8ec6f8-52f3-4bb8-a00b-4f73276a3af4-signing-cabundle\") pod \"service-ca-9c57cc56f-vbfvz\" (UID: \"cc8ec6f8-52f3-4bb8-a00b-4f73276a3af4\") " pod="openshift-service-ca/service-ca-9c57cc56f-vbfvz" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.556178 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m898f\" (UniqueName: \"kubernetes.io/projected/dc13f997-316e-4e81-a56e-0fa6e02d1502-kube-api-access-m898f\") pod \"machine-config-server-x4njd\" (UID: \"dc13f997-316e-4e81-a56e-0fa6e02d1502\") " pod="openshift-machine-config-operator/machine-config-server-x4njd" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.556192 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jw467\" (UniqueName: \"kubernetes.io/projected/9edad05e-bd87-4a20-a947-6b09f9f7c93a-kube-api-access-jw467\") pod \"catalog-operator-68c6474976-vc9q2\" (UID: \"9edad05e-bd87-4a20-a947-6b09f9f7c93a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vc9q2" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.556248 4751 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e999b5a4-2e54-4195-98fa-4c5fa36f1b3a-cert\") pod \"ingress-canary-qdsgb\" (UID: \"e999b5a4-2e54-4195-98fa-4c5fa36f1b3a\") " pod="openshift-ingress-canary/ingress-canary-qdsgb" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.556267 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ad3db81-4cb9-49a5-b4e0-55b546996fa0-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xqgfv\" (UID: \"2ad3db81-4cb9-49a5-b4e0-55b546996fa0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xqgfv" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.556841 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2ad3db81-4cb9-49a5-b4e0-55b546996fa0-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-xqgfv\" (UID: \"2ad3db81-4cb9-49a5-b4e0-55b546996fa0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xqgfv" Jan 31 14:43:58 crc kubenswrapper[4751]: E0131 14:43:58.556939 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:43:59.056897538 +0000 UTC m=+143.431610423 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.559230 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/a55fc688-004a-4d6f-a48e-c10b0ae1d8f1-csi-data-dir\") pod \"csi-hostpathplugin-x4rnh\" (UID: \"a55fc688-004a-4d6f-a48e-c10b0ae1d8f1\") " pod="hostpath-provisioner/csi-hostpathplugin-x4rnh" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.560594 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/a55fc688-004a-4d6f-a48e-c10b0ae1d8f1-socket-dir\") pod \"csi-hostpathplugin-x4rnh\" (UID: \"a55fc688-004a-4d6f-a48e-c10b0ae1d8f1\") " pod="hostpath-provisioner/csi-hostpathplugin-x4rnh" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.561550 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/d64a8f76-87cc-45eb-bc92-5802a3db6c3d-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-v4px6\" (UID: \"d64a8f76-87cc-45eb-bc92-5802a3db6c3d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v4px6" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.561729 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/cc8ec6f8-52f3-4bb8-a00b-4f73276a3af4-signing-cabundle\") pod \"service-ca-9c57cc56f-vbfvz\" (UID: \"cc8ec6f8-52f3-4bb8-a00b-4f73276a3af4\") " 
pod="openshift-service-ca/service-ca-9c57cc56f-vbfvz" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.562289 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eade01dc-846b-42a8-a6ed-8cf0a0663e82-secret-volume\") pod \"collect-profiles-29497830-rpwmp\" (UID: \"eade01dc-846b-42a8-a6ed-8cf0a0663e82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497830-rpwmp" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.562800 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f6be9bbf-6799-45e0-8d53-790a5484f3a4-config-volume\") pod \"dns-default-skzbg\" (UID: \"f6be9bbf-6799-45e0-8d53-790a5484f3a4\") " pod="openshift-dns/dns-default-skzbg" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.563222 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-5r6kv\" (UID: \"8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea\") " pod="openshift-marketplace/marketplace-operator-79b997595-5r6kv" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.564932 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-5r6kv\" (UID: \"8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea\") " pod="openshift-marketplace/marketplace-operator-79b997595-5r6kv" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.565348 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/a55fc688-004a-4d6f-a48e-c10b0ae1d8f1-registration-dir\") pod \"csi-hostpathplugin-x4rnh\" (UID: 
\"a55fc688-004a-4d6f-a48e-c10b0ae1d8f1\") " pod="hostpath-provisioner/csi-hostpathplugin-x4rnh" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.565386 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/a55fc688-004a-4d6f-a48e-c10b0ae1d8f1-plugins-dir\") pod \"csi-hostpathplugin-x4rnh\" (UID: \"a55fc688-004a-4d6f-a48e-c10b0ae1d8f1\") " pod="hostpath-provisioner/csi-hostpathplugin-x4rnh" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.565909 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/a55fc688-004a-4d6f-a48e-c10b0ae1d8f1-mountpoint-dir\") pod \"csi-hostpathplugin-x4rnh\" (UID: \"a55fc688-004a-4d6f-a48e-c10b0ae1d8f1\") " pod="hostpath-provisioner/csi-hostpathplugin-x4rnh" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.566095 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/89a244ab-c405-48aa-893f-f50995384ede-tmpfs\") pod \"packageserver-d55dfcdfc-7hjp9\" (UID: \"89a244ab-c405-48aa-893f-f50995384ede\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hjp9" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.566233 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ca236cfc-51d0-4d79-b90c-ddac400b4dbb-config\") pod \"service-ca-operator-777779d784-ghblb\" (UID: \"ca236cfc-51d0-4d79-b90c-ddac400b4dbb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ghblb" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.566305 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eade01dc-846b-42a8-a6ed-8cf0a0663e82-config-volume\") pod \"collect-profiles-29497830-rpwmp\" (UID: \"eade01dc-846b-42a8-a6ed-8cf0a0663e82\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29497830-rpwmp" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.566982 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/cc8ec6f8-52f3-4bb8-a00b-4f73276a3af4-signing-key\") pod \"service-ca-9c57cc56f-vbfvz\" (UID: \"cc8ec6f8-52f3-4bb8-a00b-4f73276a3af4\") " pod="openshift-service-ca/service-ca-9c57cc56f-vbfvz" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.567087 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f47a4e08-e21f-4a13-9ea2-bc1545a64cae-config\") pod \"kube-controller-manager-operator-78b949d7b-l76jv\" (UID: \"f47a4e08-e21f-4a13-9ea2-bc1545a64cae\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l76jv" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.567859 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/dc13f997-316e-4e81-a56e-0fa6e02d1502-certs\") pod \"machine-config-server-x4njd\" (UID: \"dc13f997-316e-4e81-a56e-0fa6e02d1502\") " pod="openshift-machine-config-operator/machine-config-server-x4njd" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.569731 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9edad05e-bd87-4a20-a947-6b09f9f7c93a-profile-collector-cert\") pod \"catalog-operator-68c6474976-vc9q2\" (UID: \"9edad05e-bd87-4a20-a947-6b09f9f7c93a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vc9q2" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.571249 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9edad05e-bd87-4a20-a947-6b09f9f7c93a-srv-cert\") pod \"catalog-operator-68c6474976-vc9q2\" (UID: 
\"9edad05e-bd87-4a20-a947-6b09f9f7c93a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vc9q2" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.574257 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2ad3db81-4cb9-49a5-b4e0-55b546996fa0-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-xqgfv\" (UID: \"2ad3db81-4cb9-49a5-b4e0-55b546996fa0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xqgfv" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.577272 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f47a4e08-e21f-4a13-9ea2-bc1545a64cae-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-l76jv\" (UID: \"f47a4e08-e21f-4a13-9ea2-bc1545a64cae\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l76jv" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.577396 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/5c630253-f658-44fb-891d-f560f1e2b577-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-h4drr\" (UID: \"5c630253-f658-44fb-891d-f560f1e2b577\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h4drr" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.577542 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/7014a649-2d58-4772-9eb3-697e4b925923-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-kvfvk\" (UID: \"7014a649-2d58-4772-9eb3-697e4b925923\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvfvk" Jan 31 
14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.577567 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/9f99779e-5e77-4b5c-8886-7accebe8a897-profile-collector-cert\") pod \"olm-operator-6b444d44fb-7hc86\" (UID: \"9f99779e-5e77-4b5c-8886-7accebe8a897\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7hc86" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.577718 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/89a244ab-c405-48aa-893f-f50995384ede-webhook-cert\") pod \"packageserver-d55dfcdfc-7hjp9\" (UID: \"89a244ab-c405-48aa-893f-f50995384ede\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hjp9" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.578042 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/dc13f997-316e-4e81-a56e-0fa6e02d1502-node-bootstrap-token\") pod \"machine-config-server-x4njd\" (UID: \"dc13f997-316e-4e81-a56e-0fa6e02d1502\") " pod="openshift-machine-config-operator/machine-config-server-x4njd" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.578242 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/89a244ab-c405-48aa-893f-f50995384ede-apiservice-cert\") pod \"packageserver-d55dfcdfc-7hjp9\" (UID: \"89a244ab-c405-48aa-893f-f50995384ede\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hjp9" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.578835 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/9f99779e-5e77-4b5c-8886-7accebe8a897-srv-cert\") pod \"olm-operator-6b444d44fb-7hc86\" (UID: \"9f99779e-5e77-4b5c-8886-7accebe8a897\") " 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7hc86" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.579295 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ca236cfc-51d0-4d79-b90c-ddac400b4dbb-serving-cert\") pod \"service-ca-operator-777779d784-ghblb\" (UID: \"ca236cfc-51d0-4d79-b90c-ddac400b4dbb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ghblb" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.582815 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwdnj\" (UniqueName: \"kubernetes.io/projected/ac1cf81b-8ec8-4ae4-bfb3-d46bb75f24d4-kube-api-access-vwdnj\") pod \"openshift-controller-manager-operator-756b6f6bc6-s7gwp\" (UID: \"ac1cf81b-8ec8-4ae4-bfb3-d46bb75f24d4\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s7gwp" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.592376 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s7gwp" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.594945 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/17d7ae01-24ad-448d-ae7c-10df353833f4-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-cp47m\" (UID: \"17d7ae01-24ad-448d-ae7c-10df353833f4\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cp47m" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.612884 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9wswl\" (UniqueName: \"kubernetes.io/projected/4c4b193a-a01b-440a-a94a-55c4b5f06586-kube-api-access-9wswl\") pod \"openshift-apiserver-operator-796bbdcf4f-nz99n\" (UID: \"4c4b193a-a01b-440a-a94a-55c4b5f06586\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nz99n" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.637172 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7sjfc\" (UniqueName: \"kubernetes.io/projected/853ca050-beae-4089-a5df-9556eeda508b-kube-api-access-7sjfc\") pod \"cluster-samples-operator-665b6dd947-8m7f4\" (UID: \"853ca050-beae-4089-a5df-9556eeda508b\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8m7f4" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.654891 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/e999b5a4-2e54-4195-98fa-4c5fa36f1b3a-cert\") pod \"ingress-canary-qdsgb\" (UID: \"e999b5a4-2e54-4195-98fa-4c5fa36f1b3a\") " pod="openshift-ingress-canary/ingress-canary-qdsgb" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.654968 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ksqdw\" (UniqueName: 
\"kubernetes.io/projected/84e2930a-5ae3-4171-a3dd-e5eea62ef157-kube-api-access-ksqdw\") pod \"route-controller-manager-6576b87f9c-7762w\" (UID: \"84e2930a-5ae3-4171-a3dd-e5eea62ef157\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7762w" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.655507 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/d64a8f76-87cc-45eb-bc92-5802a3db6c3d-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-v4px6\" (UID: \"d64a8f76-87cc-45eb-bc92-5802a3db6c3d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v4px6" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.659728 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/f6be9bbf-6799-45e0-8d53-790a5484f3a4-metrics-tls\") pod \"dns-default-skzbg\" (UID: \"f6be9bbf-6799-45e0-8d53-790a5484f3a4\") " pod="openshift-dns/dns-default-skzbg" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.661497 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:43:58 crc kubenswrapper[4751]: E0131 14:43:58.662170 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:43:59.162153398 +0000 UTC m=+143.536866283 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.662809 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-drp8h"] Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.663785 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lvlhx\" (UniqueName: \"kubernetes.io/projected/5caeb3dc-2a42-41b5-ac91-c1c8a216fb43-kube-api-access-lvlhx\") pod \"console-f9d7485db-h262z\" (UID: \"5caeb3dc-2a42-41b5-ac91-c1c8a216fb43\") " pod="openshift-console/console-f9d7485db-h262z" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.673012 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9th82\" (UniqueName: \"kubernetes.io/projected/d031fa1b-4d52-47d7-8c39-5fa21fb6c244-kube-api-access-9th82\") pod \"machine-config-controller-84d6567774-xv2tk\" (UID: \"d031fa1b-4d52-47d7-8c39-5fa21fb6c244\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xv2tk" Jan 31 14:43:58 crc kubenswrapper[4751]: W0131 14:43:58.687860 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d8dfb13_f0a0_465c_821d_95f0df0a98cf.slice/crio-f4985d6fa20c1f7c84743bb91673b6608bdb784531485a09c314e16cd026c074 WatchSource:0}: Error finding container f4985d6fa20c1f7c84743bb91673b6608bdb784531485a09c314e16cd026c074: Status 404 returned error can't find the container with id 
f4985d6fa20c1f7c84743bb91673b6608bdb784531485a09c314e16cd026c074 Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.691948 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n8662\" (UniqueName: \"kubernetes.io/projected/01ff1674-4e01-4cdc-aea3-1e91a6a389e3-kube-api-access-n8662\") pod \"router-default-5444994796-5hn9b\" (UID: \"01ff1674-4e01-4cdc-aea3-1e91a6a389e3\") " pod="openshift-ingress/router-default-5444994796-5hn9b" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.693286 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-w4lzx"] Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.712806 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ppx8c\" (UniqueName: \"kubernetes.io/projected/0058e7f4-92db-444d-a979-2880c3f83442-kube-api-access-ppx8c\") pod \"machine-approver-56656f9798-cc6m2\" (UID: \"0058e7f4-92db-444d-a979-2880c3f83442\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cc6m2" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.750246 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ngnfg\" (UniqueName: \"kubernetes.io/projected/ca236cfc-51d0-4d79-b90c-ddac400b4dbb-kube-api-access-ngnfg\") pod \"service-ca-operator-777779d784-ghblb\" (UID: \"ca236cfc-51d0-4d79-b90c-ddac400b4dbb\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-ghblb" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.762991 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:43:58 crc kubenswrapper[4751]: E0131 14:43:58.763395 4751 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:43:59.263381631 +0000 UTC m=+143.638094516 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.774105 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8m7f4" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.780885 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/d64a8f76-87cc-45eb-bc92-5802a3db6c3d-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-v4px6\" (UID: \"d64a8f76-87cc-45eb-bc92-5802a3db6c3d\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v4px6" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.805254 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-v8p8j"] Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.808043 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gxcmz\" (UniqueName: \"kubernetes.io/projected/5c630253-f658-44fb-891d-f560f1e2b577-kube-api-access-gxcmz\") pod \"control-plane-machine-set-operator-78cbb6b69f-h4drr\" (UID: \"5c630253-f658-44fb-891d-f560f1e2b577\") " 
pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h4drr" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.811190 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xr2gt"] Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.811834 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-8fgxq"] Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.819881 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m898f\" (UniqueName: \"kubernetes.io/projected/dc13f997-316e-4e81-a56e-0fa6e02d1502-kube-api-access-m898f\") pod \"machine-config-server-x4njd\" (UID: \"dc13f997-316e-4e81-a56e-0fa6e02d1502\") " pod="openshift-machine-config-operator/machine-config-server-x4njd" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.825769 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cc6m2" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.830493 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jw467\" (UniqueName: \"kubernetes.io/projected/9edad05e-bd87-4a20-a947-6b09f9f7c93a-kube-api-access-jw467\") pod \"catalog-operator-68c6474976-vc9q2\" (UID: \"9edad05e-bd87-4a20-a947-6b09f9f7c93a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vc9q2" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.838009 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7762w" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.844457 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nz99n" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.847501 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s7gwp"] Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.852113 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zwjbh\" (UniqueName: \"kubernetes.io/projected/eade01dc-846b-42a8-a6ed-8cf0a0663e82-kube-api-access-zwjbh\") pod \"collect-profiles-29497830-rpwmp\" (UID: \"eade01dc-846b-42a8-a6ed-8cf0a0663e82\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497830-rpwmp" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.858021 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cp47m" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.863666 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-5hn9b" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.864444 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:43:58 crc kubenswrapper[4751]: E0131 14:43:58.864730 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:43:59.364716077 +0000 UTC m=+143.739428962 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.869589 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f47a4e08-e21f-4a13-9ea2-bc1545a64cae-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-l76jv\" (UID: \"f47a4e08-e21f-4a13-9ea2-bc1545a64cae\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l76jv" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.892984 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f6lk6\" (UniqueName: \"kubernetes.io/projected/89a244ab-c405-48aa-893f-f50995384ede-kube-api-access-f6lk6\") pod \"packageserver-d55dfcdfc-7hjp9\" (UID: \"89a244ab-c405-48aa-893f-f50995384ede\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hjp9" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.899137 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-5f7jc"] Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.910659 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdrlx\" (UniqueName: \"kubernetes.io/projected/8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea-kube-api-access-qdrlx\") pod \"marketplace-operator-79b997595-5r6kv\" (UID: \"8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea\") " pod="openshift-marketplace/marketplace-operator-79b997595-5r6kv" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.927466 4751 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-h262z" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.930154 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-czqdr"] Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.930288 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xv2tk" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.934470 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btktd\" (UniqueName: \"kubernetes.io/projected/f6be9bbf-6799-45e0-8d53-790a5484f3a4-kube-api-access-btktd\") pod \"dns-default-skzbg\" (UID: \"f6be9bbf-6799-45e0-8d53-790a5484f3a4\") " pod="openshift-dns/dns-default-skzbg" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.938882 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-db5pg"] Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.951510 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r4td9\" (UniqueName: \"kubernetes.io/projected/2ad3db81-4cb9-49a5-b4e0-55b546996fa0-kube-api-access-r4td9\") pod \"kube-storage-version-migrator-operator-b67b599dd-xqgfv\" (UID: \"2ad3db81-4cb9-49a5-b4e0-55b546996fa0\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xqgfv" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.952325 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xqgfv" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.965899 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:43:58 crc kubenswrapper[4751]: E0131 14:43:58.966132 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:43:59.466091034 +0000 UTC m=+143.840803979 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.966366 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:43:58 crc kubenswrapper[4751]: E0131 14:43:58.966703 4751 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:43:59.466687049 +0000 UTC m=+143.841399934 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:43:58 crc kubenswrapper[4751]: W0131 14:43:58.977334 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb9810521_7440_49d4_bf04_7dbe3324cc5b.slice/crio-281ca335e21d1ef7df4a991d6128059fdc2c0654efcf33689fe07a1e7199b3d7 WatchSource:0}: Error finding container 281ca335e21d1ef7df4a991d6128059fdc2c0654efcf33689fe07a1e7199b3d7: Status 404 returned error can't find the container with id 281ca335e21d1ef7df4a991d6128059fdc2c0654efcf33689fe07a1e7199b3d7 Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.978894 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l76jv" Jan 31 14:43:58 crc kubenswrapper[4751]: W0131 14:43:58.981975 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod802d5225_ef3f_485c_bb85_3c0f18e42952.slice/crio-5084329a5b9f4efb799b2485cd137ef3c2a4c4cd5ed6e746dd1d5ef125ea23bd WatchSource:0}: Error finding container 5084329a5b9f4efb799b2485cd137ef3c2a4c4cd5ed6e746dd1d5ef125ea23bd: Status 404 returned error can't find the container with id 5084329a5b9f4efb799b2485cd137ef3c2a4c4cd5ed6e746dd1d5ef125ea23bd Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.983026 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pj5gk\" (UniqueName: \"kubernetes.io/projected/a55fc688-004a-4d6f-a48e-c10b0ae1d8f1-kube-api-access-pj5gk\") pod \"csi-hostpathplugin-x4rnh\" (UID: \"a55fc688-004a-4d6f-a48e-c10b0ae1d8f1\") " pod="hostpath-provisioner/csi-hostpathplugin-x4rnh" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.986360 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hjp9" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.993202 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h4drr" Jan 31 14:43:58 crc kubenswrapper[4751]: I0131 14:43:58.995786 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2z67k\" (UniqueName: \"kubernetes.io/projected/e999b5a4-2e54-4195-98fa-4c5fa36f1b3a-kube-api-access-2z67k\") pod \"ingress-canary-qdsgb\" (UID: \"e999b5a4-2e54-4195-98fa-4c5fa36f1b3a\") " pod="openshift-ingress-canary/ingress-canary-qdsgb" Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.000498 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ghblb" Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.008031 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vc9q2" Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.012771 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-grhf9\" (UniqueName: \"kubernetes.io/projected/9f99779e-5e77-4b5c-8886-7accebe8a897-kube-api-access-grhf9\") pod \"olm-operator-6b444d44fb-7hc86\" (UID: \"9f99779e-5e77-4b5c-8886-7accebe8a897\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7hc86" Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.014682 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5r6kv" Jan 31 14:43:59 crc kubenswrapper[4751]: W0131 14:43:59.015133 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf13811e7_14eb_4a17_90a1_345619f9fb29.slice/crio-0a19bbbc968d497d45ccaceb151b6f5c6cef19ea57a552f00fe6312a063c997d WatchSource:0}: Error finding container 0a19bbbc968d497d45ccaceb151b6f5c6cef19ea57a552f00fe6312a063c997d: Status 404 returned error can't find the container with id 0a19bbbc968d497d45ccaceb151b6f5c6cef19ea57a552f00fe6312a063c997d Jan 31 14:43:59 crc kubenswrapper[4751]: W0131 14:43:59.016175 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6a74f65d_f8d2_41af_8469_6f8d020b41de.slice/crio-ad3a2c6a62881881dc1b8603705ab22d4020343bea6e9b8d294520bb64a2c111 WatchSource:0}: Error finding container ad3a2c6a62881881dc1b8603705ab22d4020343bea6e9b8d294520bb64a2c111: Status 404 returned error can't find the container with id 
ad3a2c6a62881881dc1b8603705ab22d4020343bea6e9b8d294520bb64a2c111 Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.022011 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497830-rpwmp" Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.036274 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7hc86" Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.040238 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x8tl6\" (UniqueName: \"kubernetes.io/projected/cc8ec6f8-52f3-4bb8-a00b-4f73276a3af4-kube-api-access-x8tl6\") pod \"service-ca-9c57cc56f-vbfvz\" (UID: \"cc8ec6f8-52f3-4bb8-a00b-4f73276a3af4\") " pod="openshift-service-ca/service-ca-9c57cc56f-vbfvz" Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.041794 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-skzbg" Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.048574 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-x4njd" Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.050405 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" event={"ID":"802d5225-ef3f-485c-bb85-3c0f18e42952","Type":"ContainerStarted","Data":"5084329a5b9f4efb799b2485cd137ef3c2a4c4cd5ed6e746dd1d5ef125ea23bd"} Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.053406 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-drp8h" event={"ID":"466718f1-f118-4f13-a983-14060aef09e6","Type":"ContainerStarted","Data":"fd0306ff302533f02a3392bf052bc01c096f3c7f4bb2c4d47d41b58b4fc67bb8"} Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.053997 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-czqdr" event={"ID":"f13811e7-14eb-4a17-90a1-345619f9fb29","Type":"ContainerStarted","Data":"0a19bbbc968d497d45ccaceb151b6f5c6cef19ea57a552f00fe6312a063c997d"} Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.055256 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w4lzx" event={"ID":"aceeef0f-cb36-43d6-8e09-35949fe73911","Type":"ContainerStarted","Data":"26ee040182e4644288f05d6a5f8a159c58f3e31b7f0c23c319060a2d9e2b325f"} Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.056007 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s7gwp" event={"ID":"ac1cf81b-8ec8-4ae4-bfb3-d46bb75f24d4","Type":"ContainerStarted","Data":"a7e080d2bb12b2ede9d3754478b51602125694dbb1c64477101f2370cec9aee2"} Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.056810 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-machine-api/machine-api-operator-5694c8668f-4gqrl" event={"ID":"bcd7a932-6db9-4cca-b619-852242324725","Type":"ContainerStarted","Data":"9a0b4fb67f53340e222c61b9f6faae13df637f62fa98532035ec8c6f50ad1e2e"} Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.056839 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4gqrl" event={"ID":"bcd7a932-6db9-4cca-b619-852242324725","Type":"ContainerStarted","Data":"365a012e0dc256fca522addd41c83a63fb808621ec56987dcd5bb062ddda6006"} Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.057388 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pmglg" event={"ID":"b1b479ec-e8d7-4fb6-8d0b-9fac28697df7","Type":"ContainerStarted","Data":"fd4bcb3bdf010859761ec964c7da87c34b995e58983fa9a8f2551a87d49615ca"} Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.059272 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-hgs4c" event={"ID":"2d8dfb13-f0a0-465c-821d-95f0df0a98cf","Type":"ContainerStarted","Data":"f4985d6fa20c1f7c84743bb91673b6608bdb784531485a09c314e16cd026c074"} Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.059795 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8fgxq" event={"ID":"4ba2ceb2-34e1-487c-9b13-0a480d6cc521","Type":"ContainerStarted","Data":"9f4bdfb82d894279880a03632f547065caa26381e56b663179a66bc5f2693f47"} Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.060058 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhsxz\" (UniqueName: \"kubernetes.io/projected/d64a8f76-87cc-45eb-bc92-5802a3db6c3d-kube-api-access-jhsxz\") pod \"cluster-image-registry-operator-dc59b4c8b-v4px6\" (UID: \"d64a8f76-87cc-45eb-bc92-5802a3db6c3d\") " 
pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v4px6" Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.060509 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-sxjf5" event={"ID":"c1e92f9b-2291-4bd5-80b8-c2f9e667acf8","Type":"ContainerStarted","Data":"f58e74380a8c1e3f0d559b0c6a44b9911f247b06dc418233b9c41d9a25e6f05e"} Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.061323 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-v8p8j" event={"ID":"b9810521-7440-49d4-bf04-7dbe3324cc5b","Type":"ContainerStarted","Data":"281ca335e21d1ef7df4a991d6128059fdc2c0654efcf33689fe07a1e7199b3d7"} Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.062270 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4m7jl" event={"ID":"d723501b-bb29-4d60-ad97-239eb749771f","Type":"ContainerStarted","Data":"55f3f39b8468ee5b14f5e58ff28b8af28a163986936638ae36501a8b019fc5b0"} Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.063276 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z9dj7" event={"ID":"e14d9fb0-f377-4331-8bc1-8f4017bb95a3","Type":"ContainerStarted","Data":"9041d69395149888edcf83d772a3e7d07e853e34f5584fc9f9c54da93668e0ec"} Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.063299 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z9dj7" event={"ID":"e14d9fb0-f377-4331-8bc1-8f4017bb95a3","Type":"ContainerStarted","Data":"b04fce866f961463553a14856d3791616fdd2474de989b38b991dcb6c8a6211a"} Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.063760 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-db5pg" 
event={"ID":"6a74f65d-f8d2-41af-8469-6f8d020b41de","Type":"ContainerStarted","Data":"ad3a2c6a62881881dc1b8603705ab22d4020343bea6e9b8d294520bb64a2c111"} Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.064247 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4" event={"ID":"5f75ab4e-45c1-4ed9-b966-afa91dbc88a6","Type":"ContainerStarted","Data":"ab97991b8db78d8e752d40d1152ee78f2b57fd461dd5402070894dd0d8f788b9"} Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.064995 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" event={"ID":"89314349-bbc8-4886-b93b-51358e4e71b0","Type":"ContainerStarted","Data":"967470003d103abe0f15cb3aee95279d6db37859b3df7fae4ba6e52e803a97ed"} Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.069960 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-x4rnh" Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.070154 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:43:59 crc kubenswrapper[4751]: E0131 14:43:59.070690 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:43:59.570671905 +0000 UTC m=+143.945384780 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.075429 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-qdsgb" Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.076525 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5c65\" (UniqueName: \"kubernetes.io/projected/b17c8e83-275b-4777-946a-c7360ad8fa48-kube-api-access-r5c65\") pod \"migrator-59844c95c7-b44gm\" (UID: \"b17c8e83-275b-4777-946a-c7360ad8fa48\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b44gm" Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.109708 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z4lkn\" (UniqueName: \"kubernetes.io/projected/7014a649-2d58-4772-9eb3-697e4b925923-kube-api-access-z4lkn\") pod \"package-server-manager-789f6589d5-kvfvk\" (UID: \"7014a649-2d58-4772-9eb3-697e4b925923\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvfvk" Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.172082 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:43:59 crc kubenswrapper[4751]: 
E0131 14:43:59.172367 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:43:59.67235567 +0000 UTC m=+144.047068555 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.259480 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b44gm" Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.265947 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v4px6" Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.273140 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.273214 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvfvk" Jan 31 14:43:59 crc kubenswrapper[4751]: E0131 14:43:59.273504 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:43:59.773489841 +0000 UTC m=+144.148202726 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.293043 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-ghblb"] Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.325942 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cp47m"] Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.335676 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-vbfvz" Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.375026 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:43:59 crc kubenswrapper[4751]: E0131 14:43:59.375394 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:43:59.875376821 +0000 UTC m=+144.250089706 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.475789 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:43:59 crc kubenswrapper[4751]: E0131 14:43:59.476048 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:43:59.976002299 +0000 UTC m=+144.350715184 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.476318 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:43:59 crc kubenswrapper[4751]: E0131 14:43:59.476662 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:43:59.976642785 +0000 UTC m=+144.351355760 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.531024 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-h262z"] Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.569283 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7762w"] Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.577022 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:43:59 crc kubenswrapper[4751]: E0131 14:43:59.577183 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:00.07715561 +0000 UTC m=+144.451868505 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.577611 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:43:59 crc kubenswrapper[4751]: E0131 14:43:59.577918 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:00.077906199 +0000 UTC m=+144.452619084 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:43:59 crc kubenswrapper[4751]: W0131 14:43:59.676924 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podca236cfc_51d0_4d79_b90c_ddac400b4dbb.slice/crio-e8aeb7923ac83033c9a62d8ea8d64c7240a8bb3aff16808268712db72259da0c WatchSource:0}: Error finding container e8aeb7923ac83033c9a62d8ea8d64c7240a8bb3aff16808268712db72259da0c: Status 404 returned error can't find the container with id e8aeb7923ac83033c9a62d8ea8d64c7240a8bb3aff16808268712db72259da0c Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.679456 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:43:59 crc kubenswrapper[4751]: E0131 14:43:59.679625 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:00.179596255 +0000 UTC m=+144.554309140 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.679759 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:43:59 crc kubenswrapper[4751]: E0131 14:43:59.680112 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:00.180095768 +0000 UTC m=+144.554808663 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.781492 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:43:59 crc kubenswrapper[4751]: E0131 14:43:59.781846 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:00.281833184 +0000 UTC m=+144.656546069 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.818069 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l76jv"] Jan 31 14:43:59 crc kubenswrapper[4751]: W0131 14:43:59.832940 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5caeb3dc_2a42_41b5_ac91_c1c8a216fb43.slice/crio-e4a212221c51a253a85cdbe4957b312fae8746c13753f6177155b8edc4477e15 WatchSource:0}: Error finding container e4a212221c51a253a85cdbe4957b312fae8746c13753f6177155b8edc4477e15: Status 404 returned error can't find the container with id e4a212221c51a253a85cdbe4957b312fae8746c13753f6177155b8edc4477e15 Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.867956 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-xv2tk"] Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.882993 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:43:59 crc kubenswrapper[4751]: E0131 14:43:59.883419 4751 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:00.383399646 +0000 UTC m=+144.758112631 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:43:59 crc kubenswrapper[4751]: W0131 14:43:59.991301 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod01ff1674_4e01_4cdc_aea3_1e91a6a389e3.slice/crio-840836a6c72b8c7d0d3660d9fcc654730f428224f8ceaf3e509f7e3cfce1d3ce WatchSource:0}: Error finding container 840836a6c72b8c7d0d3660d9fcc654730f428224f8ceaf3e509f7e3cfce1d3ce: Status 404 returned error can't find the container with id 840836a6c72b8c7d0d3660d9fcc654730f428224f8ceaf3e509f7e3cfce1d3ce Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.993343 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:43:59 crc kubenswrapper[4751]: E0131 14:43:59.993553 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-31 14:44:00.493529584 +0000 UTC m=+144.868242469 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:43:59 crc kubenswrapper[4751]: I0131 14:43:59.993594 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:43:59 crc kubenswrapper[4751]: E0131 14:43:59.994018 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:00.494012207 +0000 UTC m=+144.868725092 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.073355 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cc6m2" event={"ID":"0058e7f4-92db-444d-a979-2880c3f83442","Type":"ContainerStarted","Data":"de9ae213b49446be120f1f31c36cc3cd78dec93edd36c195b61cb811975150e2"} Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.079406 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-sxjf5" event={"ID":"c1e92f9b-2291-4bd5-80b8-c2f9e667acf8","Type":"ContainerStarted","Data":"96a0531e47323a9257c24b651a7067cc71a6c2a1c9189022bfa8c72e23c446c1"} Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.082177 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xqgfv"] Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.095008 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:00 crc kubenswrapper[4751]: E0131 14:44:00.095686 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:00.595664681 +0000 UTC m=+144.970377576 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.107570 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-pmglg" event={"ID":"b1b479ec-e8d7-4fb6-8d0b-9fac28697df7","Type":"ContainerStarted","Data":"b65a4ce95281de25d0b50e1636e0b5f707265472aa0f98791c8f5c8c9aacc9dd"} Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.164744 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5hn9b" event={"ID":"01ff1674-4e01-4cdc-aea3-1e91a6a389e3","Type":"ContainerStarted","Data":"840836a6c72b8c7d0d3660d9fcc654730f428224f8ceaf3e509f7e3cfce1d3ce"} Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.177467 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8m7f4"] Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.194058 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w4lzx" event={"ID":"aceeef0f-cb36-43d6-8e09-35949fe73911","Type":"ContainerStarted","Data":"58e4e3cb7775c4ff4af54ae9e1ddd2a0c18e2492ba19b4029ef965cebf70ade9"} Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.198780 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:00 crc kubenswrapper[4751]: E0131 14:44:00.200150 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:00.700060348 +0000 UTC m=+145.074773233 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.202500 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cp47m" event={"ID":"17d7ae01-24ad-448d-ae7c-10df353833f4","Type":"ContainerStarted","Data":"11a9ef810e22dda615de31154140864da1e74955db441a99400f8429aea57e2b"} Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.203620 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-4m7jl" event={"ID":"d723501b-bb29-4d60-ad97-239eb749771f","Type":"ContainerStarted","Data":"c30c3c72b70b50a48a10df46b410ef194d9cd97429bf90377450d5f25486d136"} Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.203883 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-4m7jl" Jan 31 
14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.209545 4751 generic.go:334] "Generic (PLEG): container finished" podID="e14d9fb0-f377-4331-8bc1-8f4017bb95a3" containerID="9041d69395149888edcf83d772a3e7d07e853e34f5584fc9f9c54da93668e0ec" exitCode=0 Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.209646 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z9dj7" event={"ID":"e14d9fb0-f377-4331-8bc1-8f4017bb95a3","Type":"ContainerDied","Data":"9041d69395149888edcf83d772a3e7d07e853e34f5584fc9f9c54da93668e0ec"} Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.211244 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-hgs4c" event={"ID":"2d8dfb13-f0a0-465c-821d-95f0df0a98cf","Type":"ContainerStarted","Data":"a0c496cc6bc86826d7fe98acbe97e89bc0f321821d928b23952b9f122be93fc1"} Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.212695 4751 patch_prober.go:28] interesting pod/downloads-7954f5f757-4m7jl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.212712 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h4drr"] Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.212731 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4m7jl" podUID="d723501b-bb29-4d60-ad97-239eb749771f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.214420 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-h262z" 
event={"ID":"5caeb3dc-2a42-41b5-ac91-c1c8a216fb43","Type":"ContainerStarted","Data":"e4a212221c51a253a85cdbe4957b312fae8746c13753f6177155b8edc4477e15"} Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.216216 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l76jv" event={"ID":"f47a4e08-e21f-4a13-9ea2-bc1545a64cae","Type":"ContainerStarted","Data":"e428f36863e8b2a4199be5e87a3bbb26686432fcbc853e16723a120cc64b0c3b"} Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.217723 4751 generic.go:334] "Generic (PLEG): container finished" podID="5f75ab4e-45c1-4ed9-b966-afa91dbc88a6" containerID="c94a9d68d990b53dfe15b33171acb76b1530c8276359ca051256f382f8210faf" exitCode=0 Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.217776 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4" event={"ID":"5f75ab4e-45c1-4ed9-b966-afa91dbc88a6","Type":"ContainerDied","Data":"c94a9d68d990b53dfe15b33171acb76b1530c8276359ca051256f382f8210faf"} Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.225114 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s7gwp" event={"ID":"ac1cf81b-8ec8-4ae4-bfb3-d46bb75f24d4","Type":"ContainerStarted","Data":"1a9425425136bc0fcd1b4e7782f30ce168735073d7c66c4111fc0e6d33b533c2"} Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.229574 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xv2tk" event={"ID":"d031fa1b-4d52-47d7-8c39-5fa21fb6c244","Type":"ContainerStarted","Data":"926223eb332d6bd0fd3e2c89d69a100612602a488e2aab796efe3cf85cf01deb"} Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.231051 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7762w" event={"ID":"84e2930a-5ae3-4171-a3dd-e5eea62ef157","Type":"ContainerStarted","Data":"15e734ffd4fba2493be6a9b1bfbac50c0f6bd9a8e2ffdca45f856621c3703f44"} Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.241379 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ghblb" event={"ID":"ca236cfc-51d0-4d79-b90c-ddac400b4dbb","Type":"ContainerStarted","Data":"e8aeb7923ac83033c9a62d8ea8d64c7240a8bb3aff16808268712db72259da0c"} Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.243794 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-drp8h" event={"ID":"466718f1-f118-4f13-a983-14060aef09e6","Type":"ContainerStarted","Data":"aced264e1a5d786ba81e1dfd63c009187b1be53d0ee6ad4e1170392825ce2f8a"} Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.300212 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:00 crc kubenswrapper[4751]: E0131 14:44:00.301148 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:00.801132577 +0000 UTC m=+145.175845462 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.402241 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:00 crc kubenswrapper[4751]: E0131 14:44:00.403105 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:00.903089898 +0000 UTC m=+145.277802783 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.426863 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nz99n"] Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.451625 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7hc86"] Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.504000 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:00 crc kubenswrapper[4751]: E0131 14:44:00.504353 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:01.004339322 +0000 UTC m=+145.379052207 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.606138 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:00 crc kubenswrapper[4751]: E0131 14:44:00.606472 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:01.106460339 +0000 UTC m=+145.481173224 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.700838 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5r6kv"] Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.707458 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:00 crc kubenswrapper[4751]: E0131 14:44:00.707830 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:01.207816215 +0000 UTC m=+145.582529100 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.718895 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-skzbg"] Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.720954 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-vbfvz"] Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.733885 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497830-rpwmp"] Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.739841 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hjp9"] Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.748750 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-4m7jl" podStartSLOduration=120.748735795 podStartE2EDuration="2m0.748735795s" podCreationTimestamp="2026-01-31 14:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:00.74623837 +0000 UTC m=+145.120951255" watchObservedRunningTime="2026-01-31 14:44:00.748735795 +0000 UTC m=+145.123448680" Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.784553 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vc9q2"] Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 
14:44:00.793844 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-hgs4c" podStartSLOduration=119.793825696 podStartE2EDuration="1m59.793825696s" podCreationTimestamp="2026-01-31 14:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:00.788445974 +0000 UTC m=+145.163158849" watchObservedRunningTime="2026-01-31 14:44:00.793825696 +0000 UTC m=+145.168538581" Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.808725 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:00 crc kubenswrapper[4751]: E0131 14:44:00.809038 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:01.309026738 +0000 UTC m=+145.683739623 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.865691 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-s7gwp" podStartSLOduration=119.865673363 podStartE2EDuration="1m59.865673363s" podCreationTimestamp="2026-01-31 14:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:00.840059577 +0000 UTC m=+145.214772462" watchObservedRunningTime="2026-01-31 14:44:00.865673363 +0000 UTC m=+145.240386248" Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.867251 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-b44gm"] Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.867671 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-qdsgb"] Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.871229 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v4px6"] Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.871262 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-x4rnh"] Jan 31 14:44:00 crc kubenswrapper[4751]: W0131 14:44:00.874968 4751 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod89a244ab_c405_48aa_893f_f50995384ede.slice/crio-cf6795000c69e913a437a37284db30d46d068223ad9c3aaaf739a528bd1f8eab WatchSource:0}: Error finding container cf6795000c69e913a437a37284db30d46d068223ad9c3aaaf739a528bd1f8eab: Status 404 returned error can't find the container with id cf6795000c69e913a437a37284db30d46d068223ad9c3aaaf739a528bd1f8eab Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.910759 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:00 crc kubenswrapper[4751]: E0131 14:44:00.911325 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:01.411310768 +0000 UTC m=+145.786023653 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.935939 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvfvk"] Jan 31 14:44:00 crc kubenswrapper[4751]: W0131 14:44:00.985887 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb17c8e83_275b_4777_946a_c7360ad8fa48.slice/crio-af4f3abc4a3b62a128d16eb2c611b54e459447c23307822acee137a849594c11 WatchSource:0}: Error finding container af4f3abc4a3b62a128d16eb2c611b54e459447c23307822acee137a849594c11: Status 404 returned error can't find the container with id af4f3abc4a3b62a128d16eb2c611b54e459447c23307822acee137a849594c11 Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.996902 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-sxjf5" podStartSLOduration=119.996882888 podStartE2EDuration="1m59.996882888s" podCreationTimestamp="2026-01-31 14:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:00.959339907 +0000 UTC m=+145.334052792" watchObservedRunningTime="2026-01-31 14:44:00.996882888 +0000 UTC m=+145.371595773" Jan 31 14:44:00 crc kubenswrapper[4751]: I0131 14:44:00.997303 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-authentication-operator/authentication-operator-69f744f599-pmglg" podStartSLOduration=120.997297539 podStartE2EDuration="2m0.997297539s" podCreationTimestamp="2026-01-31 14:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:00.996104067 +0000 UTC m=+145.370816942" watchObservedRunningTime="2026-01-31 14:44:00.997297539 +0000 UTC m=+145.372010414" Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.012720 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:01 crc kubenswrapper[4751]: E0131 14:44:01.013058 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:01.513047385 +0000 UTC m=+145.887760270 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:01 crc kubenswrapper[4751]: W0131 14:44:01.050290 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd64a8f76_87cc_45eb_bc92_5802a3db6c3d.slice/crio-9394bf1aac35689cd86cf663972ec5310ddc561b1fc7c22e8b46fa9b64fffc79 WatchSource:0}: Error finding container 9394bf1aac35689cd86cf663972ec5310ddc561b1fc7c22e8b46fa9b64fffc79: Status 404 returned error can't find the container with id 9394bf1aac35689cd86cf663972ec5310ddc561b1fc7c22e8b46fa9b64fffc79 Jan 31 14:44:01 crc kubenswrapper[4751]: W0131 14:44:01.080469 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7014a649_2d58_4772_9eb3_697e4b925923.slice/crio-76846dec47e5b4970ff32470e578446f70d52dac43c1c3d5677e971c56d13a0f WatchSource:0}: Error finding container 76846dec47e5b4970ff32470e578446f70d52dac43c1c3d5677e971c56d13a0f: Status 404 returned error can't find the container with id 76846dec47e5b4970ff32470e578446f70d52dac43c1c3d5677e971c56d13a0f Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.124064 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:01 crc kubenswrapper[4751]: E0131 14:44:01.124227 4751 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:01.62420718 +0000 UTC m=+145.998920065 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.124384 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:01 crc kubenswrapper[4751]: E0131 14:44:01.124827 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:01.624818906 +0000 UTC m=+145.999531791 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.224953 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:01 crc kubenswrapper[4751]: E0131 14:44:01.225311 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:01.72529562 +0000 UTC m=+146.100008505 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.225608 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:01 crc kubenswrapper[4751]: E0131 14:44:01.225861 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:01.725853164 +0000 UTC m=+146.100566049 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.250983 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-vbfvz" event={"ID":"cc8ec6f8-52f3-4bb8-a00b-4f73276a3af4","Type":"ContainerStarted","Data":"3bf0bb5c75e0d3e46cdfd98f66f2db57aa365840a1943b8b5415bf165d392066"} Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.251731 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qdsgb" event={"ID":"e999b5a4-2e54-4195-98fa-4c5fa36f1b3a","Type":"ContainerStarted","Data":"4574660b2ec0b8ca54c33feb5110d2f650949aa790d87c7c7eb817bea8cc9a00"} Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.255008 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-db5pg" event={"ID":"6a74f65d-f8d2-41af-8469-6f8d020b41de","Type":"ContainerStarted","Data":"59249b5db3559d44ce1d49fd55799f7f3bcc4d0ede7839ca802f94b7dd5b3b94"} Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.256268 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5r6kv" event={"ID":"8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea","Type":"ContainerStarted","Data":"8d8b4a1528af48d18db181db8a7bebc79bb86f32aba8601a554e74b7bcaef05b"} Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.270981 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-db5pg" 
podStartSLOduration=121.270962386 podStartE2EDuration="2m1.270962386s" podCreationTimestamp="2026-01-31 14:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:01.270722209 +0000 UTC m=+145.645435094" watchObservedRunningTime="2026-01-31 14:44:01.270962386 +0000 UTC m=+145.645675271" Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.271109 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z9dj7" event={"ID":"e14d9fb0-f377-4331-8bc1-8f4017bb95a3","Type":"ContainerStarted","Data":"20879949a09c49e31dedd607f1479e96ea74daf3c078632d9eeaf5e4b3b68d85"} Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.284151 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vc9q2" event={"ID":"9edad05e-bd87-4a20-a947-6b09f9f7c93a","Type":"ContainerStarted","Data":"c82f7bca9875a2ef35eb3f4f5fbad8acfc64f20ad382ddc3364970333da29ca0"} Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.290528 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cp47m" event={"ID":"17d7ae01-24ad-448d-ae7c-10df353833f4","Type":"ContainerStarted","Data":"52542f554222fd78c4824c2265e7cc50a2c69472e6e2676a61d7a7affa119cfc"} Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.297361 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xv2tk" event={"ID":"d031fa1b-4d52-47d7-8c39-5fa21fb6c244","Type":"ContainerStarted","Data":"795803bb88a0e7f1bbf4e9896ee665ce49d04a6cdb547135406940042ea76f72"} Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.305613 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-czqdr" 
event={"ID":"f13811e7-14eb-4a17-90a1-345619f9fb29","Type":"ContainerStarted","Data":"903e9ce05a25a0620d64da0a5d11a7b3416512e6fbe058d2e17fdc2b278a385a"} Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.307647 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-cp47m" podStartSLOduration=120.307614453 podStartE2EDuration="2m0.307614453s" podCreationTimestamp="2026-01-31 14:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:01.303495085 +0000 UTC m=+145.678207970" watchObservedRunningTime="2026-01-31 14:44:01.307614453 +0000 UTC m=+145.682327338" Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.354616 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:01 crc kubenswrapper[4751]: E0131 14:44:01.355115 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:01.855023005 +0000 UTC m=+146.229735890 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.356256 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:01 crc kubenswrapper[4751]: E0131 14:44:01.357016 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:01.857003067 +0000 UTC m=+146.231715952 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.359858 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-czqdr" podStartSLOduration=120.359841912 podStartE2EDuration="2m0.359841912s" podCreationTimestamp="2026-01-31 14:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:01.357764428 +0000 UTC m=+145.732477313" watchObservedRunningTime="2026-01-31 14:44:01.359841912 +0000 UTC m=+145.734554797" Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.364576 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-x4njd" event={"ID":"dc13f997-316e-4e81-a56e-0fa6e02d1502","Type":"ContainerStarted","Data":"dce6077c142973950a60f9709e36f8427a2008077ace6934308f24b16df27181"} Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.364611 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-x4njd" event={"ID":"dc13f997-316e-4e81-a56e-0fa6e02d1502","Type":"ContainerStarted","Data":"352db6cf586acb663510b8303b28633793e0196c8b93440629b7075d97e63518"} Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.369778 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-v8p8j" 
event={"ID":"b9810521-7440-49d4-bf04-7dbe3324cc5b","Type":"ContainerStarted","Data":"e2c726fff05f34e326d0ee1987f7f7c60d6d26af5f8fe6fa7c97fa815a3317d7"} Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.380430 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-x4njd" podStartSLOduration=6.380415966 podStartE2EDuration="6.380415966s" podCreationTimestamp="2026-01-31 14:43:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:01.377139179 +0000 UTC m=+145.751852064" watchObservedRunningTime="2026-01-31 14:44:01.380415966 +0000 UTC m=+145.755128851" Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.401448 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-h262z" event={"ID":"5caeb3dc-2a42-41b5-ac91-c1c8a216fb43","Type":"ContainerStarted","Data":"4ca9486c0c475df9706c69a8042a91ffb75ee83dea5dea8944cb0a263025ee01"} Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.429904 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w4lzx" event={"ID":"aceeef0f-cb36-43d6-8e09-35949fe73911","Type":"ContainerStarted","Data":"4baa46c448affe4dd56663a4e343cbc7c6fcd34dd03a41b5df88bb7dcb0741d8"} Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.437030 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-h262z" podStartSLOduration=121.43700477 podStartE2EDuration="2m1.43700477s" podCreationTimestamp="2026-01-31 14:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:01.436710622 +0000 UTC m=+145.811423507" watchObservedRunningTime="2026-01-31 14:44:01.43700477 +0000 UTC m=+145.811717655" Jan 31 14:44:01 
crc kubenswrapper[4751]: I0131 14:44:01.447763 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nz99n" event={"ID":"4c4b193a-a01b-440a-a94a-55c4b5f06586","Type":"ContainerStarted","Data":"08256894410bac7c47817ab162acd2e447ad5d18b741d1c9eb8858f122bb60c1"} Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.460490 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-w4lzx" podStartSLOduration=120.4604697 podStartE2EDuration="2m0.4604697s" podCreationTimestamp="2026-01-31 14:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:01.453689961 +0000 UTC m=+145.828402846" watchObservedRunningTime="2026-01-31 14:44:01.4604697 +0000 UTC m=+145.835182585" Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.461304 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:01 crc kubenswrapper[4751]: E0131 14:44:01.461937 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:01.961904878 +0000 UTC m=+146.336617763 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.464989 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-drp8h" event={"ID":"466718f1-f118-4f13-a983-14060aef09e6","Type":"ContainerStarted","Data":"b09e25f06983841ed3cf3749bd8fc6428edcabe64175c718c26a79d1849563e7"} Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.469876 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-4gqrl" event={"ID":"bcd7a932-6db9-4cca-b619-852242324725","Type":"ContainerStarted","Data":"d2d4f70fe3b8dde349f58413464d1d27ca8ea76a8246d3c31d2f88d50401e6c2"} Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.475349 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ghblb" event={"ID":"ca236cfc-51d0-4d79-b90c-ddac400b4dbb","Type":"ContainerStarted","Data":"f69800f221f673da0026a0fae7aed9cdbc49c7f5de5433b68fb8c0b3dcc1115b"} Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.477856 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-drp8h" podStartSLOduration=120.477841848 podStartE2EDuration="2m0.477841848s" podCreationTimestamp="2026-01-31 14:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:01.477402117 +0000 UTC m=+145.852115002" 
watchObservedRunningTime="2026-01-31 14:44:01.477841848 +0000 UTC m=+145.852554733" Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.478053 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvfvk" event={"ID":"7014a649-2d58-4772-9eb3-697e4b925923","Type":"ContainerStarted","Data":"76846dec47e5b4970ff32470e578446f70d52dac43c1c3d5677e971c56d13a0f"} Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.497222 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" event={"ID":"802d5225-ef3f-485c-bb85-3c0f18e42952","Type":"ContainerStarted","Data":"01af9b04a121e47de6d720ef96908370b377b2bf6ed16ab772bd8cea30c24502"} Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.498061 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.502430 4751 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-xr2gt container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.502468 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" podUID="802d5225-ef3f-485c-bb85-3c0f18e42952" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.508842 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v4px6" 
event={"ID":"d64a8f76-87cc-45eb-bc92-5802a3db6c3d","Type":"ContainerStarted","Data":"9394bf1aac35689cd86cf663972ec5310ddc561b1fc7c22e8b46fa9b64fffc79"} Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.512337 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hjp9" event={"ID":"89a244ab-c405-48aa-893f-f50995384ede","Type":"ContainerStarted","Data":"cf6795000c69e913a437a37284db30d46d068223ad9c3aaaf739a528bd1f8eab"} Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.519322 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-ghblb" podStartSLOduration=120.519303693 podStartE2EDuration="2m0.519303693s" podCreationTimestamp="2026-01-31 14:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:01.490506503 +0000 UTC m=+145.865219388" watchObservedRunningTime="2026-01-31 14:44:01.519303693 +0000 UTC m=+145.894016578" Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.521918 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b44gm" event={"ID":"b17c8e83-275b-4777-946a-c7360ad8fa48","Type":"ContainerStarted","Data":"af4f3abc4a3b62a128d16eb2c611b54e459447c23307822acee137a849594c11"} Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.526957 4751 generic.go:334] "Generic (PLEG): container finished" podID="89314349-bbc8-4886-b93b-51358e4e71b0" containerID="5eb306a96af5104746662256225468d9f5c6caad943f48faa1ea6569b8191d66" exitCode=0 Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.527057 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" 
event={"ID":"89314349-bbc8-4886-b93b-51358e4e71b0","Type":"ContainerDied","Data":"5eb306a96af5104746662256225468d9f5c6caad943f48faa1ea6569b8191d66"} Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.533710 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xqgfv" event={"ID":"2ad3db81-4cb9-49a5-b4e0-55b546996fa0","Type":"ContainerStarted","Data":"f3b82becccb0b3b0a7ccc90a858d15265859ba0425fe811f0ea63bd98090d636"} Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.533776 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xqgfv" event={"ID":"2ad3db81-4cb9-49a5-b4e0-55b546996fa0","Type":"ContainerStarted","Data":"48f9659ec28b8818f31ecc6ee3c28403c7bcd4830214a7011852e9664deee8e5"} Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.535956 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8fgxq" event={"ID":"4ba2ceb2-34e1-487c-9b13-0a480d6cc521","Type":"ContainerStarted","Data":"6899da3ff15b265325d63b73a135e1e7647ed2f68a975cd0a5b3abfc7c692122"} Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.542550 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cc6m2" event={"ID":"0058e7f4-92db-444d-a979-2880c3f83442","Type":"ContainerStarted","Data":"32af7db5826454a8966a23c401bb0e748ca145604ab881bdd690506b73645f81"} Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.546805 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8m7f4" event={"ID":"853ca050-beae-4089-a5df-9556eeda508b","Type":"ContainerStarted","Data":"f8a8687cb73c8ea914386022ed953012a318696a73bafd13ca6351ecf6baedb0"} Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.560267 
4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-4gqrl" podStartSLOduration=120.560249074 podStartE2EDuration="2m0.560249074s" podCreationTimestamp="2026-01-31 14:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:01.528948748 +0000 UTC m=+145.903661633" watchObservedRunningTime="2026-01-31 14:44:01.560249074 +0000 UTC m=+145.934961959" Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.561630 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" podStartSLOduration=121.56160889 podStartE2EDuration="2m1.56160889s" podCreationTimestamp="2026-01-31 14:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:01.556623049 +0000 UTC m=+145.931335934" watchObservedRunningTime="2026-01-31 14:44:01.56160889 +0000 UTC m=+145.936321765" Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.563593 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:01 crc kubenswrapper[4751]: E0131 14:44:01.566895 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:02.06687985 +0000 UTC m=+146.441592745 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.585388 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7hc86" event={"ID":"9f99779e-5e77-4b5c-8886-7accebe8a897","Type":"ContainerStarted","Data":"4040eb76ab95246c66d1300eb50d9756c93fadedb6446beab730cd9f803ccec1"} Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.607290 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-xqgfv" podStartSLOduration=120.607271216 podStartE2EDuration="2m0.607271216s" podCreationTimestamp="2026-01-31 14:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:01.607188214 +0000 UTC m=+145.981901089" watchObservedRunningTime="2026-01-31 14:44:01.607271216 +0000 UTC m=+145.981984101" Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.621631 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7762w" event={"ID":"84e2930a-5ae3-4171-a3dd-e5eea62ef157","Type":"ContainerStarted","Data":"8e402889398f0b5d93bacd46f42378e3cdc7f2ee478995578d04804d8ec0f029"} Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.622923 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7762w" Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.624823 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x4rnh" event={"ID":"a55fc688-004a-4d6f-a48e-c10b0ae1d8f1","Type":"ContainerStarted","Data":"7abf360dd9c0e2e95e4396aa0bbb3d62d9791b073cdcaa6e42b4b4a4c5cef71e"} Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.630367 4751 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-7762w container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.630442 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7762w" podUID="84e2930a-5ae3-4171-a3dd-e5eea62ef157" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.648518 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7762w" podStartSLOduration=120.648500335 podStartE2EDuration="2m0.648500335s" podCreationTimestamp="2026-01-31 14:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:01.647530229 +0000 UTC m=+146.022243134" watchObservedRunningTime="2026-01-31 14:44:01.648500335 +0000 UTC m=+146.023213220" Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.652276 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-skzbg" 
event={"ID":"f6be9bbf-6799-45e0-8d53-790a5484f3a4","Type":"ContainerStarted","Data":"0c2cf46331a59d6d049bbbbf297f3bf3ef89c230accfb3d1d65334c10776daf5"} Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.668590 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.669710 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497830-rpwmp" event={"ID":"eade01dc-846b-42a8-a6ed-8cf0a0663e82","Type":"ContainerStarted","Data":"9d44648df839910022878a08450dec667db28fe365908b86584da87c8884b401"} Jan 31 14:44:01 crc kubenswrapper[4751]: E0131 14:44:01.670164 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:02.170133046 +0000 UTC m=+146.544846011 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.683973 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h4drr" event={"ID":"5c630253-f658-44fb-891d-f560f1e2b577","Type":"ContainerStarted","Data":"37dea14bb855fc0c21bc1fd1a3441e623824acc1c0c1865c768e36090fc733e6"} Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.684063 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-sxjf5" Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.684160 4751 patch_prober.go:28] interesting pod/downloads-7954f5f757-4m7jl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.684191 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4m7jl" podUID="d723501b-bb29-4d60-ad97-239eb749771f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.690537 4751 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-sxjf5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 
10.217.0.5:8443: connect: connection refused" start-of-body= Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.690578 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-sxjf5" podUID="c1e92f9b-2291-4bd5-80b8-c2f9e667acf8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.700176 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h4drr" podStartSLOduration=120.700161539 podStartE2EDuration="2m0.700161539s" podCreationTimestamp="2026-01-31 14:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:01.700141078 +0000 UTC m=+146.074853963" watchObservedRunningTime="2026-01-31 14:44:01.700161539 +0000 UTC m=+146.074874424" Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.769786 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:01 crc kubenswrapper[4751]: E0131 14:44:01.770449 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:02.270433345 +0000 UTC m=+146.645146230 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.870560 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:01 crc kubenswrapper[4751]: E0131 14:44:01.870695 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:02.370675792 +0000 UTC m=+146.745388677 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.871087 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:01 crc kubenswrapper[4751]: E0131 14:44:01.873643 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:02.3736349 +0000 UTC m=+146.748347785 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.971638 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:01 crc kubenswrapper[4751]: E0131 14:44:01.971835 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:02.471808102 +0000 UTC m=+146.846520987 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:01 crc kubenswrapper[4751]: I0131 14:44:01.971969 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:01 crc kubenswrapper[4751]: E0131 14:44:01.972333 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:02.472323046 +0000 UTC m=+146.847036011 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.072705 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:02 crc kubenswrapper[4751]: E0131 14:44:02.073244 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:02.57322624 +0000 UTC m=+146.947939125 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.182423 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:02 crc kubenswrapper[4751]: E0131 14:44:02.185362 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:02.685347321 +0000 UTC m=+147.060060206 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.283791 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:02 crc kubenswrapper[4751]: E0131 14:44:02.284434 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:02.784412247 +0000 UTC m=+147.159125132 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.388165 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:02 crc kubenswrapper[4751]: E0131 14:44:02.388497 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:02.888485635 +0000 UTC m=+147.263198520 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.488972 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:02 crc kubenswrapper[4751]: E0131 14:44:02.489162 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:02.989136853 +0000 UTC m=+147.363849738 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.489357 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:02 crc kubenswrapper[4751]: E0131 14:44:02.489604 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:02.989597325 +0000 UTC m=+147.364310210 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.590149 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:02 crc kubenswrapper[4751]: E0131 14:44:02.590343 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:03.090317825 +0000 UTC m=+147.465030710 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.590617 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:02 crc kubenswrapper[4751]: E0131 14:44:02.590908 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:03.09090021 +0000 UTC m=+147.465613095 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.631567 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.689870 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vc9q2" event={"ID":"9edad05e-bd87-4a20-a947-6b09f9f7c93a","Type":"ContainerStarted","Data":"6e56ec891c20edecfef021496eccccd07734ce2f1e1a464798c1434bf7865d77"} Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.690303 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vc9q2" Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.691160 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:02 crc kubenswrapper[4751]: E0131 14:44:02.691301 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:03.191277341 +0000 UTC m=+147.565990236 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.691492 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.691903 4751 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-vc9q2 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.691950 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vc9q2" podUID="9edad05e-bd87-4a20-a947-6b09f9f7c93a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Jan 31 14:44:02 crc kubenswrapper[4751]: E0131 14:44:02.692446 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-31 14:44:03.192431071 +0000 UTC m=+147.567144066 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.692558 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4" event={"ID":"5f75ab4e-45c1-4ed9-b966-afa91dbc88a6","Type":"ContainerStarted","Data":"0f296d886a4e86c9fb7c0821d1aba553311a118eb2a4d1ed0c4ac96370b82aa8"} Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.694673 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xv2tk" event={"ID":"d031fa1b-4d52-47d7-8c39-5fa21fb6c244","Type":"ContainerStarted","Data":"ecf1d59d03b76e02faf85fdf6e2372715e317b2d34e7cad119bde4411e19b28f"} Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.696314 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-8fgxq" event={"ID":"4ba2ceb2-34e1-487c-9b13-0a480d6cc521","Type":"ContainerStarted","Data":"4329bd44507bea908031067a5de7d34eed01214163138e6bbd8c31f760de6fe5"} Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.698271 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-5hn9b" event={"ID":"01ff1674-4e01-4cdc-aea3-1e91a6a389e3","Type":"ContainerStarted","Data":"0598de1264a51e543dcd743ae65f25862fb214b642229772b95f9286721b9b77"} Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.699659 4751 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l76jv" event={"ID":"f47a4e08-e21f-4a13-9ea2-bc1545a64cae","Type":"ContainerStarted","Data":"1aad2b6d72ce6c851e49ad7275df69908b14d74b5b348519616ddd15626c6128"} Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.701946 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v4px6" event={"ID":"d64a8f76-87cc-45eb-bc92-5802a3db6c3d","Type":"ContainerStarted","Data":"8f80be1b4f04566c437fc0b68bc723eae3982f173d1303617673bfaa62f1f3dc"} Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.704559 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b44gm" event={"ID":"b17c8e83-275b-4777-946a-c7360ad8fa48","Type":"ContainerStarted","Data":"2e231a1fa2f3ae6e3e93b3c4f014cf4b18f0455ad1eb05c727fc37de2d1bd364"} Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.704605 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b44gm" event={"ID":"b17c8e83-275b-4777-946a-c7360ad8fa48","Type":"ContainerStarted","Data":"4f43fdfe7720646fe62898497506d1efbe37a0c991a885a83b74fbd4b8132c74"} Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.705046 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vc9q2" podStartSLOduration=121.705035164 podStartE2EDuration="2m1.705035164s" podCreationTimestamp="2026-01-31 14:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:02.703552615 +0000 UTC m=+147.078265500" watchObservedRunningTime="2026-01-31 14:44:02.705035164 +0000 UTC m=+147.079748049" Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.706565 4751 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8m7f4" event={"ID":"853ca050-beae-4089-a5df-9556eeda508b","Type":"ContainerStarted","Data":"a52fabbd60d21daf58e9d10f46243c241d1f2ed6b3424cd226da30bbcac9aefb"} Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.706593 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8m7f4" event={"ID":"853ca050-beae-4089-a5df-9556eeda508b","Type":"ContainerStarted","Data":"ce1be442c938a3a303c31bde3f68208f03c465b14cc789c88b94bb5029c12adc"} Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.708196 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7hc86" event={"ID":"9f99779e-5e77-4b5c-8886-7accebe8a897","Type":"ContainerStarted","Data":"2886b4beec8fbe46944eef93fa0ce15d7bd64528ac55c6bd65531ed442dea8af"} Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.708433 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7hc86" Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.709738 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hjp9" event={"ID":"89a244ab-c405-48aa-893f-f50995384ede","Type":"ContainerStarted","Data":"81a09aec382eab3d1121a3bdc5e760cd357ff1f2ae90a816828e7967447d1045"} Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.710943 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hjp9" Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.711041 4751 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-7hc86 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get 
\"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.711097 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7hc86" podUID="9f99779e-5e77-4b5c-8886-7accebe8a897" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.711934 4751 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-7hjp9 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.711978 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hjp9" podUID="89a244ab-c405-48aa-893f-f50995384ede" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.713972 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" event={"ID":"89314349-bbc8-4886-b93b-51358e4e71b0","Type":"ContainerStarted","Data":"466860ab81933ff0ead4c18a5a020887a870782a5543ba0f4d4b546fa85314fb"} Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.714031 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" event={"ID":"89314349-bbc8-4886-b93b-51358e4e71b0","Type":"ContainerStarted","Data":"83302bf2ea2d2fb8158b63ecb177ddf0ca75d838e791dc4168c4f281ff0d4cdf"} Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.717874 4751 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvfvk" event={"ID":"7014a649-2d58-4772-9eb3-697e4b925923","Type":"ContainerStarted","Data":"5fa53bf5deac84b0ced844b48b88bff7503d29ce0ca7b87ae125cb33f9057c7c"} Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.717924 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvfvk" event={"ID":"7014a649-2d58-4772-9eb3-697e4b925923","Type":"ContainerStarted","Data":"6ba7af36aec1573d4d7f1b398d8fff0c8050ad6797e5d647b129bab4a92b19e3"} Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.718048 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvfvk" Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.721946 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-skzbg" event={"ID":"f6be9bbf-6799-45e0-8d53-790a5484f3a4","Type":"ContainerStarted","Data":"1992e3beb7df5f31535ea048340397d93c80b4751327faf367960d45b1b123f4"} Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.721987 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-skzbg" event={"ID":"f6be9bbf-6799-45e0-8d53-790a5484f3a4","Type":"ContainerStarted","Data":"aff3281f7d4db094a665eb4271557a6c71c2f13307ba221302e7069d8acf2fab"} Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.722552 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-skzbg" Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.724890 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cc6m2" event={"ID":"0058e7f4-92db-444d-a979-2880c3f83442","Type":"ContainerStarted","Data":"2f491d8d63860ba98caece552ebaff2be1c8f903cbea1402c89ae7f9679ec278"} Jan 31 14:44:02 crc 
kubenswrapper[4751]: I0131 14:44:02.728681 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nz99n" event={"ID":"4c4b193a-a01b-440a-a94a-55c4b5f06586","Type":"ContainerStarted","Data":"99d061d2fb49bff96c704106d2ba56cc3727d17965227544725d3610de44bde3"} Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.730552 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-vbfvz" event={"ID":"cc8ec6f8-52f3-4bb8-a00b-4f73276a3af4","Type":"ContainerStarted","Data":"023362aaa08846f80ba54c770dd5238cd4255165bfb25a5345e3ce4931729bb2"} Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.732025 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5r6kv" event={"ID":"8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea","Type":"ContainerStarted","Data":"cc163d448fa8fad6b5ab0077c0960c4003a53c503f6d097090f206fed6245a22"} Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.732622 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-5r6kv" Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.734055 4751 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5r6kv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.734114 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5r6kv" podUID="8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 
14:44:02.734444 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-qdsgb" event={"ID":"e999b5a4-2e54-4195-98fa-4c5fa36f1b3a","Type":"ContainerStarted","Data":"8461527ee3273a8a1474a8ffc7a818ace12f0233df1abd1b2f654fcad45ffcc4"} Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.736296 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497830-rpwmp" event={"ID":"eade01dc-846b-42a8-a6ed-8cf0a0663e82","Type":"ContainerStarted","Data":"9d26a6d6092efc3cfe1b53bda2539e32fc75d0f27a288ecda4b2062254a0fc73"} Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.736543 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4" podStartSLOduration=121.736527136 podStartE2EDuration="2m1.736527136s" podCreationTimestamp="2026-01-31 14:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:02.735259722 +0000 UTC m=+147.109972607" watchObservedRunningTime="2026-01-31 14:44:02.736527136 +0000 UTC m=+147.111240021" Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.738567 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-v8p8j" event={"ID":"b9810521-7440-49d4-bf04-7dbe3324cc5b","Type":"ContainerStarted","Data":"14415e9473becce338fe33d29144e768b2735a0df0f1ed6016935ce3baae1250"} Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.741677 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-h4drr" event={"ID":"5c630253-f658-44fb-891d-f560f1e2b577","Type":"ContainerStarted","Data":"a83294cf387b6e17c78fb2bd16cec0b52e7bd1aa28deeafb800ae11bb2f42f2f"} Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.742510 4751 patch_prober.go:28] interesting 
pod/route-controller-manager-6576b87f9c-7762w container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" start-of-body= Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.742539 4751 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-xr2gt container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" start-of-body= Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.742563 4751 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-sxjf5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" start-of-body= Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.742575 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" podUID="802d5225-ef3f-485c-bb85-3c0f18e42952" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.16:6443/healthz\": dial tcp 10.217.0.16:6443: connect: connection refused" Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.742593 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-sxjf5" podUID="c1e92f9b-2291-4bd5-80b8-c2f9e667acf8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": dial tcp 10.217.0.5:8443: connect: connection refused" Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.742555 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7762w" 
podUID="84e2930a-5ae3-4171-a3dd-e5eea62ef157" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": dial tcp 10.217.0.10:8443: connect: connection refused" Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.744552 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-db5pg" Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.744581 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z9dj7" Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.744913 4751 patch_prober.go:28] interesting pod/console-operator-58897d9998-db5pg container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.744960 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-db5pg" podUID="6a74f65d-f8d2-41af-8469-6f8d020b41de" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.9:8443/readyz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.792964 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.794650 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-xv2tk" 
podStartSLOduration=121.79463675 podStartE2EDuration="2m1.79463675s" podCreationTimestamp="2026-01-31 14:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:02.770628736 +0000 UTC m=+147.145341621" watchObservedRunningTime="2026-01-31 14:44:02.79463675 +0000 UTC m=+147.169349635" Jan 31 14:44:02 crc kubenswrapper[4751]: E0131 14:44:02.795835 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:03.295810661 +0000 UTC m=+147.670523636 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.797976 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.802389 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-v4px6" podStartSLOduration=121.802381524 podStartE2EDuration="2m1.802381524s" 
podCreationTimestamp="2026-01-31 14:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:02.795540014 +0000 UTC m=+147.170252899" watchObservedRunningTime="2026-01-31 14:44:02.802381524 +0000 UTC m=+147.177094399" Jan 31 14:44:02 crc kubenswrapper[4751]: E0131 14:44:02.802588 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:03.302571069 +0000 UTC m=+147.677283954 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.822037 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-8fgxq" podStartSLOduration=121.822021763 podStartE2EDuration="2m1.822021763s" podCreationTimestamp="2026-01-31 14:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:02.820471562 +0000 UTC m=+147.195184457" watchObservedRunningTime="2026-01-31 14:44:02.822021763 +0000 UTC m=+147.196734648" Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.865990 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-5hn9b" Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.867776 4751 
patch_prober.go:28] interesting pod/router-default-5444994796-5hn9b container/router namespace/openshift-ingress: Startup probe status=failure output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" start-of-body= Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.867821 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5hn9b" podUID="01ff1674-4e01-4cdc-aea3-1e91a6a389e3" containerName="router" probeResult="failure" output="Get \"http://localhost:1936/healthz/ready\": dial tcp [::1]:1936: connect: connection refused" Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.868408 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-l76jv" podStartSLOduration=121.868399678 podStartE2EDuration="2m1.868399678s" podCreationTimestamp="2026-01-31 14:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:02.867775701 +0000 UTC m=+147.242488586" watchObservedRunningTime="2026-01-31 14:44:02.868399678 +0000 UTC m=+147.243112553" Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.869920 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-5hn9b" podStartSLOduration=121.869915198 podStartE2EDuration="2m1.869915198s" podCreationTimestamp="2026-01-31 14:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:02.849108628 +0000 UTC m=+147.223821503" watchObservedRunningTime="2026-01-31 14:44:02.869915198 +0000 UTC m=+147.244628083" Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.882460 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7hc86" podStartSLOduration=121.882438218 podStartE2EDuration="2m1.882438218s" podCreationTimestamp="2026-01-31 14:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:02.881634027 +0000 UTC m=+147.256346912" watchObservedRunningTime="2026-01-31 14:44:02.882438218 +0000 UTC m=+147.257151103" Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.898221 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-vbfvz" podStartSLOduration=121.898203275 podStartE2EDuration="2m1.898203275s" podCreationTimestamp="2026-01-31 14:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:02.898138643 +0000 UTC m=+147.272851528" watchObservedRunningTime="2026-01-31 14:44:02.898203275 +0000 UTC m=+147.272916160" Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.911743 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:02 crc kubenswrapper[4751]: E0131 14:44:02.912211 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:03.412183594 +0000 UTC m=+147.786896539 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.917962 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-8m7f4" podStartSLOduration=122.917941386 podStartE2EDuration="2m2.917941386s" podCreationTimestamp="2026-01-31 14:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:02.915365818 +0000 UTC m=+147.290078703" watchObservedRunningTime="2026-01-31 14:44:02.917941386 +0000 UTC m=+147.292654271" Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.961006 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-nz99n" podStartSLOduration=122.960988293 podStartE2EDuration="2m2.960988293s" podCreationTimestamp="2026-01-31 14:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:02.960062918 +0000 UTC m=+147.334775803" watchObservedRunningTime="2026-01-31 14:44:02.960988293 +0000 UTC m=+147.335701178" Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.961876 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-cc6m2" podStartSLOduration=122.961871286 podStartE2EDuration="2m2.961871286s" podCreationTimestamp="2026-01-31 14:42:00 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:02.941017885 +0000 UTC m=+147.315730770" watchObservedRunningTime="2026-01-31 14:44:02.961871286 +0000 UTC m=+147.336584161" Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.983306 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z9dj7" podStartSLOduration=122.983288802 podStartE2EDuration="2m2.983288802s" podCreationTimestamp="2026-01-31 14:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:02.983011134 +0000 UTC m=+147.357724019" watchObservedRunningTime="2026-01-31 14:44:02.983288802 +0000 UTC m=+147.358001687" Jan 31 14:44:02 crc kubenswrapper[4751]: I0131 14:44:02.999101 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-v8p8j" podStartSLOduration=121.999087119 podStartE2EDuration="2m1.999087119s" podCreationTimestamp="2026-01-31 14:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:02.997883557 +0000 UTC m=+147.372596442" watchObservedRunningTime="2026-01-31 14:44:02.999087119 +0000 UTC m=+147.373800004" Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.013013 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:03 crc kubenswrapper[4751]: E0131 14:44:03.013310 4751 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:03.513299774 +0000 UTC m=+147.888012659 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.029181 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4" Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.029499 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4" Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.030823 4751 patch_prober.go:28] interesting pod/apiserver-7bbb656c7d-wdsj4 container/oauth-apiserver namespace/openshift-oauth-apiserver: Startup probe status=failure output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.030856 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4" podUID="5f75ab4e-45c1-4ed9-b966-afa91dbc88a6" containerName="oauth-apiserver" probeResult="failure" output="Get \"https://10.217.0.6:8443/livez\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.042055 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-dns/dns-default-skzbg" podStartSLOduration=8.042032023 podStartE2EDuration="8.042032023s" podCreationTimestamp="2026-01-31 14:43:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:03.039811484 +0000 UTC m=+147.414524369" watchObservedRunningTime="2026-01-31 14:44:03.042032023 +0000 UTC m=+147.416744908" Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.042326 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-5r6kv" podStartSLOduration=122.04232149 podStartE2EDuration="2m2.04232149s" podCreationTimestamp="2026-01-31 14:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:03.022560029 +0000 UTC m=+147.397272914" watchObservedRunningTime="2026-01-31 14:44:03.04232149 +0000 UTC m=+147.417034365" Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.059311 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hjp9" podStartSLOduration=122.059295649 podStartE2EDuration="2m2.059295649s" podCreationTimestamp="2026-01-31 14:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:03.058660162 +0000 UTC m=+147.433373047" watchObservedRunningTime="2026-01-31 14:44:03.059295649 +0000 UTC m=+147.434008534" Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.089099 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" podStartSLOduration=123.089085035 podStartE2EDuration="2m3.089085035s" podCreationTimestamp="2026-01-31 14:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:03.088183721 +0000 UTC m=+147.462896606" watchObservedRunningTime="2026-01-31 14:44:03.089085035 +0000 UTC m=+147.463797920" Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.114177 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:03 crc kubenswrapper[4751]: E0131 14:44:03.114488 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:03.614474786 +0000 UTC m=+147.989187671 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.128462 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvfvk" podStartSLOduration=122.128448665 podStartE2EDuration="2m2.128448665s" podCreationTimestamp="2026-01-31 14:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:03.108135408 +0000 UTC m=+147.482848293" watchObservedRunningTime="2026-01-31 14:44:03.128448665 +0000 UTC m=+147.503161540" Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.128612 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497830-rpwmp" podStartSLOduration=123.128609939 podStartE2EDuration="2m3.128609939s" podCreationTimestamp="2026-01-31 14:42:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:03.126680728 +0000 UTC m=+147.501393613" watchObservedRunningTime="2026-01-31 14:44:03.128609939 +0000 UTC m=+147.503322824" Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.153803 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-qdsgb" podStartSLOduration=7.153791354 podStartE2EDuration="7.153791354s" podCreationTimestamp="2026-01-31 14:43:56 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:03.13962869 +0000 UTC m=+147.514341575" watchObservedRunningTime="2026-01-31 14:44:03.153791354 +0000 UTC m=+147.528504239" Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.215653 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:03 crc kubenswrapper[4751]: E0131 14:44:03.216037 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:03.716019967 +0000 UTC m=+148.090732852 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.317204 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:03 crc kubenswrapper[4751]: E0131 14:44:03.317385 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:03.817359943 +0000 UTC m=+148.192072828 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.317488 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:03 crc kubenswrapper[4751]: E0131 14:44:03.317801 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:03.817793795 +0000 UTC m=+148.192506680 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.418542 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:03 crc kubenswrapper[4751]: E0131 14:44:03.418732 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:03.918709359 +0000 UTC m=+148.293422244 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.418875 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:03 crc kubenswrapper[4751]: E0131 14:44:03.419191 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:03.919184962 +0000 UTC m=+148.293897847 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.519805 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:03 crc kubenswrapper[4751]: E0131 14:44:03.519982 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:04.019959053 +0000 UTC m=+148.394671938 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.520269 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:03 crc kubenswrapper[4751]: E0131 14:44:03.520694 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:04.020677652 +0000 UTC m=+148.395390537 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.535586 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.535643 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.537385 4751 patch_prober.go:28] interesting pod/apiserver-76f77b778f-5f7jc container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="Get \"https://10.217.0.14:8443/livez\": dial tcp 10.217.0.14:8443: connect: connection refused" start-of-body= Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.537437 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" podUID="89314349-bbc8-4886-b93b-51358e4e71b0" containerName="openshift-apiserver" probeResult="failure" output="Get \"https://10.217.0.14:8443/livez\": dial tcp 10.217.0.14:8443: connect: connection refused" Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.621104 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:03 crc kubenswrapper[4751]: E0131 
14:44:03.621283 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:04.121258778 +0000 UTC m=+148.495971663 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.621519 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:03 crc kubenswrapper[4751]: E0131 14:44:03.621805 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:04.121796672 +0000 UTC m=+148.496509557 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.723007 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:03 crc kubenswrapper[4751]: E0131 14:44:03.723200 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:04.223176489 +0000 UTC m=+148.597889374 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.723277 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:03 crc kubenswrapper[4751]: E0131 14:44:03.723568 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:04.223561499 +0000 UTC m=+148.598274384 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.746690 4751 patch_prober.go:28] interesting pod/olm-operator-6b444d44fb-7hc86 container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" start-of-body= Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.746736 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7hc86" podUID="9f99779e-5e77-4b5c-8886-7accebe8a897" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.36:8443/healthz\": dial tcp 10.217.0.36:8443: connect: connection refused" Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.746774 4751 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5r6kv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.746817 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5r6kv" podUID="8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Jan 31 14:44:03 crc 
kubenswrapper[4751]: I0131 14:44:03.747087 4751 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-vc9q2 container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" start-of-body= Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.747135 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vc9q2" podUID="9edad05e-bd87-4a20-a947-6b09f9f7c93a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.42:8443/healthz\": dial tcp 10.217.0.42:8443: connect: connection refused" Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.748169 4751 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-7hjp9 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" start-of-body= Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.748196 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hjp9" podUID="89a244ab-c405-48aa-893f-f50995384ede" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": dial tcp 10.217.0.39:5443: connect: connection refused" Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.752981 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7762w" Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.824500 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:03 crc kubenswrapper[4751]: E0131 14:44:03.824703 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:04.324667119 +0000 UTC m=+148.699380004 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.826146 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:03 crc kubenswrapper[4751]: E0131 14:44:03.828176 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:04.328159941 +0000 UTC m=+148.702872826 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.835529 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-b44gm" podStartSLOduration=122.835512245 podStartE2EDuration="2m2.835512245s" podCreationTimestamp="2026-01-31 14:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:03.162562435 +0000 UTC m=+147.537275320" watchObservedRunningTime="2026-01-31 14:44:03.835512245 +0000 UTC m=+148.210225120" Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.874734 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5hn9b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:44:03 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Jan 31 14:44:03 crc kubenswrapper[4751]: [+]process-running ok Jan 31 14:44:03 crc kubenswrapper[4751]: healthz check failed Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.875114 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5hn9b" podUID="01ff1674-4e01-4cdc-aea3-1e91a6a389e3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.931215 4751 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:03 crc kubenswrapper[4751]: E0131 14:44:03.931384 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:04.431360106 +0000 UTC m=+148.806072991 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:03 crc kubenswrapper[4751]: I0131 14:44:03.931540 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:03 crc kubenswrapper[4751]: E0131 14:44:03.931839 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:04.431828268 +0000 UTC m=+148.806541153 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:04 crc kubenswrapper[4751]: I0131 14:44:04.003186 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:44:04 crc kubenswrapper[4751]: I0131 14:44:04.032298 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:04 crc kubenswrapper[4751]: E0131 14:44:04.032463 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:04.532441435 +0000 UTC m=+148.907154320 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:04 crc kubenswrapper[4751]: I0131 14:44:04.032798 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:04 crc kubenswrapper[4751]: E0131 14:44:04.033207 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:04.533189354 +0000 UTC m=+148.907902239 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:04 crc kubenswrapper[4751]: I0131 14:44:04.075373 4751 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-z9dj7 container/openshift-config-operator namespace/openshift-config-operator: Readiness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 31 14:44:04 crc kubenswrapper[4751]: I0131 14:44:04.075614 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z9dj7" podUID="e14d9fb0-f377-4331-8bc1-8f4017bb95a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" Jan 31 14:44:04 crc kubenswrapper[4751]: I0131 14:44:04.075388 4751 patch_prober.go:28] interesting pod/openshift-config-operator-7777fb866f-z9dj7 container/openshift-config-operator namespace/openshift-config-operator: Liveness probe status=failure output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: connection refused" start-of-body= Jan 31 14:44:04 crc kubenswrapper[4751]: I0131 14:44:04.075791 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z9dj7" podUID="e14d9fb0-f377-4331-8bc1-8f4017bb95a3" containerName="openshift-config-operator" probeResult="failure" output="Get \"https://10.217.0.7:8443/healthz\": dial tcp 10.217.0.7:8443: connect: 
connection refused" Jan 31 14:44:04 crc kubenswrapper[4751]: I0131 14:44:04.134193 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:04 crc kubenswrapper[4751]: E0131 14:44:04.134404 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:04.634381246 +0000 UTC m=+149.009094131 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:04 crc kubenswrapper[4751]: I0131 14:44:04.134487 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:04 crc kubenswrapper[4751]: E0131 14:44:04.134813 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-31 14:44:04.634801458 +0000 UTC m=+149.009514343 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:04 crc kubenswrapper[4751]: I0131 14:44:04.235603 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:04 crc kubenswrapper[4751]: E0131 14:44:04.235833 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:04.735802645 +0000 UTC m=+149.110515620 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:04 crc kubenswrapper[4751]: I0131 14:44:04.236478 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:04 crc kubenswrapper[4751]: E0131 14:44:04.236826 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:04.736817161 +0000 UTC m=+149.111530046 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:04 crc kubenswrapper[4751]: I0131 14:44:04.338284 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:04 crc kubenswrapper[4751]: E0131 14:44:04.338431 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:04.838409354 +0000 UTC m=+149.213122229 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:04 crc kubenswrapper[4751]: I0131 14:44:04.338833 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:04 crc kubenswrapper[4751]: E0131 14:44:04.339160 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:04.839153034 +0000 UTC m=+149.213865909 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:04 crc kubenswrapper[4751]: I0131 14:44:04.349266 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-db5pg" Jan 31 14:44:04 crc kubenswrapper[4751]: I0131 14:44:04.439871 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:04 crc kubenswrapper[4751]: E0131 14:44:04.440200 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:04.940185242 +0000 UTC m=+149.314898127 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:04 crc kubenswrapper[4751]: I0131 14:44:04.540973 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:04 crc kubenswrapper[4751]: E0131 14:44:04.541419 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:05.041375124 +0000 UTC m=+149.416088009 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:04 crc kubenswrapper[4751]: I0131 14:44:04.642009 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:04 crc kubenswrapper[4751]: E0131 14:44:04.642217 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:05.142192226 +0000 UTC m=+149.516905111 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:04 crc kubenswrapper[4751]: I0131 14:44:04.642343 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:04 crc kubenswrapper[4751]: E0131 14:44:04.642693 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:05.142686619 +0000 UTC m=+149.517399504 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:04 crc kubenswrapper[4751]: I0131 14:44:04.743601 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:04 crc kubenswrapper[4751]: E0131 14:44:04.744055 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:05.244029955 +0000 UTC m=+149.618742840 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:04 crc kubenswrapper[4751]: I0131 14:44:04.751025 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x4rnh" event={"ID":"a55fc688-004a-4d6f-a48e-c10b0ae1d8f1","Type":"ContainerStarted","Data":"2ab0f0faa26018e025fa907c7d5674f1870c94097e91e92f032c5b5d999c96de"} Jan 31 14:44:04 crc kubenswrapper[4751]: I0131 14:44:04.751991 4751 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-5r6kv container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" start-of-body= Jan 31 14:44:04 crc kubenswrapper[4751]: I0131 14:44:04.752107 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-5r6kv" podUID="8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.32:8080/healthz\": dial tcp 10.217.0.32:8080: connect: connection refused" Jan 31 14:44:04 crc kubenswrapper[4751]: I0131 14:44:04.845473 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:04 crc 
kubenswrapper[4751]: E0131 14:44:04.848762 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:05.34874934 +0000 UTC m=+149.723462225 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:04 crc kubenswrapper[4751]: I0131 14:44:04.868110 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5hn9b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:44:04 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Jan 31 14:44:04 crc kubenswrapper[4751]: [+]process-running ok Jan 31 14:44:04 crc kubenswrapper[4751]: healthz check failed Jan 31 14:44:04 crc kubenswrapper[4751]: I0131 14:44:04.868163 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5hn9b" podUID="01ff1674-4e01-4cdc-aea3-1e91a6a389e3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:44:04 crc kubenswrapper[4751]: I0131 14:44:04.946569 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:04 crc kubenswrapper[4751]: E0131 14:44:04.946721 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:05.446696947 +0000 UTC m=+149.821409832 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:04 crc kubenswrapper[4751]: I0131 14:44:04.946761 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:04 crc kubenswrapper[4751]: E0131 14:44:04.947047 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:05.447032936 +0000 UTC m=+149.821745821 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:05 crc kubenswrapper[4751]: I0131 14:44:05.048393 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:05 crc kubenswrapper[4751]: E0131 14:44:05.048567 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:05.548543696 +0000 UTC m=+149.923256581 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:05 crc kubenswrapper[4751]: I0131 14:44:05.049794 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:05 crc kubenswrapper[4751]: E0131 14:44:05.050132 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:05.550124948 +0000 UTC m=+149.924837833 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:05 crc kubenswrapper[4751]: I0131 14:44:05.150564 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:05 crc kubenswrapper[4751]: E0131 14:44:05.150761 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:05.650734055 +0000 UTC m=+150.025446940 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:05 crc kubenswrapper[4751]: I0131 14:44:05.150866 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:05 crc kubenswrapper[4751]: E0131 14:44:05.151161 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:05.651150186 +0000 UTC m=+150.025863071 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:05 crc kubenswrapper[4751]: I0131 14:44:05.164148 4751 csr.go:261] certificate signing request csr-47twz is approved, waiting to be issued Jan 31 14:44:05 crc kubenswrapper[4751]: I0131 14:44:05.171125 4751 csr.go:257] certificate signing request csr-47twz is issued Jan 31 14:44:05 crc kubenswrapper[4751]: I0131 14:44:05.251906 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:05 crc kubenswrapper[4751]: E0131 14:44:05.252272 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:05.752245705 +0000 UTC m=+150.126958590 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:05 crc kubenswrapper[4751]: I0131 14:44:05.353706 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:44:05 crc kubenswrapper[4751]: I0131 14:44:05.353749 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:44:05 crc kubenswrapper[4751]: I0131 14:44:05.353775 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:05 crc kubenswrapper[4751]: I0131 14:44:05.353800 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: 
\"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:44:05 crc kubenswrapper[4751]: I0131 14:44:05.353859 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:44:05 crc kubenswrapper[4751]: E0131 14:44:05.354537 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:05.854519266 +0000 UTC m=+150.229232151 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:05 crc kubenswrapper[4751]: I0131 14:44:05.356807 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:44:05 crc kubenswrapper[4751]: I0131 14:44:05.360031 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:44:05 crc kubenswrapper[4751]: I0131 14:44:05.360536 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:44:05 crc kubenswrapper[4751]: I0131 14:44:05.362610 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: 
\"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:44:05 crc kubenswrapper[4751]: I0131 14:44:05.454611 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:05 crc kubenswrapper[4751]: E0131 14:44:05.454767 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:05.954741502 +0000 UTC m=+150.329454387 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:05 crc kubenswrapper[4751]: I0131 14:44:05.454911 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:05 crc kubenswrapper[4751]: E0131 14:44:05.455224 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:05.955217645 +0000 UTC m=+150.329930530 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:05 crc kubenswrapper[4751]: I0131 14:44:05.556022 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:05 crc kubenswrapper[4751]: E0131 14:44:05.556201 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:06.056176601 +0000 UTC m=+150.430889486 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:05 crc kubenswrapper[4751]: I0131 14:44:05.556244 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:05 crc kubenswrapper[4751]: E0131 14:44:05.556595 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:06.056565581 +0000 UTC m=+150.431278466 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:05 crc kubenswrapper[4751]: I0131 14:44:05.621883 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:44:05 crc kubenswrapper[4751]: I0131 14:44:05.630202 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 31 14:44:05 crc kubenswrapper[4751]: I0131 14:44:05.637491 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 31 14:44:05 crc kubenswrapper[4751]: I0131 14:44:05.657900 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:05 crc kubenswrapper[4751]: E0131 14:44:05.657996 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:06.157981539 +0000 UTC m=+150.532694424 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:05 crc kubenswrapper[4751]: I0131 14:44:05.658177 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:05 crc kubenswrapper[4751]: E0131 14:44:05.658426 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:06.158420161 +0000 UTC m=+150.533133036 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:05 crc kubenswrapper[4751]: I0131 14:44:05.709460 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-z9dj7" Jan 31 14:44:05 crc kubenswrapper[4751]: I0131 14:44:05.751291 4751 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-7hjp9 container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 14:44:05 crc kubenswrapper[4751]: I0131 14:44:05.751331 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hjp9" podUID="89a244ab-c405-48aa-893f-f50995384ede" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.39:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 14:44:05 crc kubenswrapper[4751]: I0131 14:44:05.760301 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:05 crc kubenswrapper[4751]: E0131 14:44:05.760546 4751 
nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:06.260533257 +0000 UTC m=+150.635246142 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:05 crc kubenswrapper[4751]: I0131 14:44:05.861386 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:05 crc kubenswrapper[4751]: E0131 14:44:05.861900 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:06.361889563 +0000 UTC m=+150.736602438 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:05 crc kubenswrapper[4751]: I0131 14:44:05.867689 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5hn9b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:44:05 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Jan 31 14:44:05 crc kubenswrapper[4751]: [+]process-running ok Jan 31 14:44:05 crc kubenswrapper[4751]: healthz check failed Jan 31 14:44:05 crc kubenswrapper[4751]: I0131 14:44:05.867748 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5hn9b" podUID="01ff1674-4e01-4cdc-aea3-1e91a6a389e3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:44:05 crc kubenswrapper[4751]: I0131 14:44:05.964498 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:05 crc kubenswrapper[4751]: E0131 14:44:05.964837 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
No retries permitted until 2026-01-31 14:44:06.464821041 +0000 UTC m=+150.839533916 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.066773 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:06 crc kubenswrapper[4751]: E0131 14:44:06.067105 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:06.567094492 +0000 UTC m=+150.941807377 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.095647 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-wcnsn"] Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.098583 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wcnsn" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.103646 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.167652 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.168040 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/074619b7-9220-4377-b93d-6088199a5e16-catalog-content\") pod \"community-operators-wcnsn\" (UID: \"074619b7-9220-4377-b93d-6088199a5e16\") " pod="openshift-marketplace/community-operators-wcnsn" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.168109 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/074619b7-9220-4377-b93d-6088199a5e16-utilities\") pod \"community-operators-wcnsn\" (UID: \"074619b7-9220-4377-b93d-6088199a5e16\") " pod="openshift-marketplace/community-operators-wcnsn" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.168162 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pzp7l\" (UniqueName: \"kubernetes.io/projected/074619b7-9220-4377-b93d-6088199a5e16-kube-api-access-pzp7l\") pod \"community-operators-wcnsn\" (UID: \"074619b7-9220-4377-b93d-6088199a5e16\") " pod="openshift-marketplace/community-operators-wcnsn" Jan 31 14:44:06 crc kubenswrapper[4751]: E0131 14:44:06.168321 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:06.668304125 +0000 UTC m=+151.043017010 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.173153 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-31 14:39:05 +0000 UTC, rotation deadline is 2026-12-15 17:02:29.334903936 +0000 UTC Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.173194 4751 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7634h18m23.161712952s for next certificate rotation Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.199049 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wcnsn"] Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.270674 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pzp7l\" (UniqueName: \"kubernetes.io/projected/074619b7-9220-4377-b93d-6088199a5e16-kube-api-access-pzp7l\") pod \"community-operators-wcnsn\" (UID: \"074619b7-9220-4377-b93d-6088199a5e16\") " pod="openshift-marketplace/community-operators-wcnsn" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.270770 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.270807 4751 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/074619b7-9220-4377-b93d-6088199a5e16-utilities\") pod \"community-operators-wcnsn\" (UID: \"074619b7-9220-4377-b93d-6088199a5e16\") " pod="openshift-marketplace/community-operators-wcnsn" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.270828 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/074619b7-9220-4377-b93d-6088199a5e16-catalog-content\") pod \"community-operators-wcnsn\" (UID: \"074619b7-9220-4377-b93d-6088199a5e16\") " pod="openshift-marketplace/community-operators-wcnsn" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.271306 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/074619b7-9220-4377-b93d-6088199a5e16-catalog-content\") pod \"community-operators-wcnsn\" (UID: \"074619b7-9220-4377-b93d-6088199a5e16\") " pod="openshift-marketplace/community-operators-wcnsn" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.278660 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/074619b7-9220-4377-b93d-6088199a5e16-utilities\") pod \"community-operators-wcnsn\" (UID: \"074619b7-9220-4377-b93d-6088199a5e16\") " pod="openshift-marketplace/community-operators-wcnsn" Jan 31 14:44:06 crc kubenswrapper[4751]: E0131 14:44:06.280370 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:06.780345063 +0000 UTC m=+151.155057948 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.299606 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-m4m6r"] Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.300577 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m4m6r" Jan 31 14:44:06 crc kubenswrapper[4751]: W0131 14:44:06.302412 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fe485a1_e14f_4c09_b5b9_f252bc42b7e8.slice/crio-52c9d95b8b20694f3ac62508dc0d5d93be23cde43d59f9d6743b2c3adecdeb4f WatchSource:0}: Error finding container 52c9d95b8b20694f3ac62508dc0d5d93be23cde43d59f9d6743b2c3adecdeb4f: Status 404 returned error can't find the container with id 52c9d95b8b20694f3ac62508dc0d5d93be23cde43d59f9d6743b2c3adecdeb4f Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.306303 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.306937 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pzp7l\" (UniqueName: \"kubernetes.io/projected/074619b7-9220-4377-b93d-6088199a5e16-kube-api-access-pzp7l\") pod \"community-operators-wcnsn\" (UID: \"074619b7-9220-4377-b93d-6088199a5e16\") " pod="openshift-marketplace/community-operators-wcnsn" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 
14:44:06.310970 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m4m6r"] Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.372505 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.372728 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d5f1383-42d7-47a1-9e47-8dba038241d2-catalog-content\") pod \"certified-operators-m4m6r\" (UID: \"8d5f1383-42d7-47a1-9e47-8dba038241d2\") " pod="openshift-marketplace/certified-operators-m4m6r" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.372773 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d5f1383-42d7-47a1-9e47-8dba038241d2-utilities\") pod \"certified-operators-m4m6r\" (UID: \"8d5f1383-42d7-47a1-9e47-8dba038241d2\") " pod="openshift-marketplace/certified-operators-m4m6r" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.372790 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-566b8\" (UniqueName: \"kubernetes.io/projected/8d5f1383-42d7-47a1-9e47-8dba038241d2-kube-api-access-566b8\") pod \"certified-operators-m4m6r\" (UID: \"8d5f1383-42d7-47a1-9e47-8dba038241d2\") " pod="openshift-marketplace/certified-operators-m4m6r" Jan 31 14:44:06 crc kubenswrapper[4751]: E0131 14:44:06.372884 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:06.872868806 +0000 UTC m=+151.247581691 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.439238 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wcnsn" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.473850 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d5f1383-42d7-47a1-9e47-8dba038241d2-catalog-content\") pod \"certified-operators-m4m6r\" (UID: \"8d5f1383-42d7-47a1-9e47-8dba038241d2\") " pod="openshift-marketplace/certified-operators-m4m6r" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.473905 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.473925 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d5f1383-42d7-47a1-9e47-8dba038241d2-utilities\") pod \"certified-operators-m4m6r\" (UID: \"8d5f1383-42d7-47a1-9e47-8dba038241d2\") " 
pod="openshift-marketplace/certified-operators-m4m6r" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.473940 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-566b8\" (UniqueName: \"kubernetes.io/projected/8d5f1383-42d7-47a1-9e47-8dba038241d2-kube-api-access-566b8\") pod \"certified-operators-m4m6r\" (UID: \"8d5f1383-42d7-47a1-9e47-8dba038241d2\") " pod="openshift-marketplace/certified-operators-m4m6r" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.474344 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d5f1383-42d7-47a1-9e47-8dba038241d2-catalog-content\") pod \"certified-operators-m4m6r\" (UID: \"8d5f1383-42d7-47a1-9e47-8dba038241d2\") " pod="openshift-marketplace/certified-operators-m4m6r" Jan 31 14:44:06 crc kubenswrapper[4751]: E0131 14:44:06.474453 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:06.974443229 +0000 UTC m=+151.349156114 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.474632 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d5f1383-42d7-47a1-9e47-8dba038241d2-utilities\") pod \"certified-operators-m4m6r\" (UID: \"8d5f1383-42d7-47a1-9e47-8dba038241d2\") " pod="openshift-marketplace/certified-operators-m4m6r" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.476258 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ln2lx"] Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.477155 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ln2lx" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.489875 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-566b8\" (UniqueName: \"kubernetes.io/projected/8d5f1383-42d7-47a1-9e47-8dba038241d2-kube-api-access-566b8\") pod \"certified-operators-m4m6r\" (UID: \"8d5f1383-42d7-47a1-9e47-8dba038241d2\") " pod="openshift-marketplace/certified-operators-m4m6r" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.496796 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ln2lx"] Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.574659 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.574918 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5c0f5c8-cecf-451f-abef-bf357716eb71-catalog-content\") pod \"community-operators-ln2lx\" (UID: \"d5c0f5c8-cecf-451f-abef-bf357716eb71\") " pod="openshift-marketplace/community-operators-ln2lx" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.574954 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xd7bp\" (UniqueName: \"kubernetes.io/projected/d5c0f5c8-cecf-451f-abef-bf357716eb71-kube-api-access-xd7bp\") pod \"community-operators-ln2lx\" (UID: \"d5c0f5c8-cecf-451f-abef-bf357716eb71\") " pod="openshift-marketplace/community-operators-ln2lx" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.574977 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5c0f5c8-cecf-451f-abef-bf357716eb71-utilities\") pod \"community-operators-ln2lx\" (UID: \"d5c0f5c8-cecf-451f-abef-bf357716eb71\") " pod="openshift-marketplace/community-operators-ln2lx" Jan 31 14:44:06 crc kubenswrapper[4751]: E0131 14:44:06.575178 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:07.075162718 +0000 UTC m=+151.449875603 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.634680 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m4m6r" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.671324 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2lq4t"] Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.672194 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2lq4t" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.681650 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2lq4t"] Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.686797 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5c0f5c8-cecf-451f-abef-bf357716eb71-catalog-content\") pod \"community-operators-ln2lx\" (UID: \"d5c0f5c8-cecf-451f-abef-bf357716eb71\") " pod="openshift-marketplace/community-operators-ln2lx" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.686834 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xd7bp\" (UniqueName: \"kubernetes.io/projected/d5c0f5c8-cecf-451f-abef-bf357716eb71-kube-api-access-xd7bp\") pod \"community-operators-ln2lx\" (UID: \"d5c0f5c8-cecf-451f-abef-bf357716eb71\") " pod="openshift-marketplace/community-operators-ln2lx" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.686857 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5c0f5c8-cecf-451f-abef-bf357716eb71-utilities\") pod \"community-operators-ln2lx\" (UID: \"d5c0f5c8-cecf-451f-abef-bf357716eb71\") " pod="openshift-marketplace/community-operators-ln2lx" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.686907 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.687490 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5c0f5c8-cecf-451f-abef-bf357716eb71-catalog-content\") pod \"community-operators-ln2lx\" (UID: \"d5c0f5c8-cecf-451f-abef-bf357716eb71\") " pod="openshift-marketplace/community-operators-ln2lx" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.687840 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5c0f5c8-cecf-451f-abef-bf357716eb71-utilities\") pod \"community-operators-ln2lx\" (UID: \"d5c0f5c8-cecf-451f-abef-bf357716eb71\") " pod="openshift-marketplace/community-operators-ln2lx" Jan 31 14:44:06 crc kubenswrapper[4751]: E0131 14:44:06.688027 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:07.188014408 +0000 UTC m=+151.562727293 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.705917 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xd7bp\" (UniqueName: \"kubernetes.io/projected/d5c0f5c8-cecf-451f-abef-bf357716eb71-kube-api-access-xd7bp\") pod \"community-operators-ln2lx\" (UID: \"d5c0f5c8-cecf-451f-abef-bf357716eb71\") " pod="openshift-marketplace/community-operators-ln2lx" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.776517 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"70b37644cca0f337fc4dfd4871e75b89a2f42b4c243cf2cfc79c2e019ace9a46"} Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.777007 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"52c9d95b8b20694f3ac62508dc0d5d93be23cde43d59f9d6743b2c3adecdeb4f"} Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.779062 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"170a4113c11ddf109b05e9b2b2d59fdc24f80149db398271581c0e03098fdceb"} Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.779115 4751 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"50465fb60971365621e3ff7ead311c712da1f8cd2c99229b9e0bc34c0650ce23"} Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.781705 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"c7abe610f55ef7baaa7e163f1517703c61608ff709ef5bf94be18db548949429"} Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.781766 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"07ec98dd31415f89b39b186e72ce385edc73781a64d2d2f5fcf1affce07c6f0c"} Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.782214 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.784735 4751 generic.go:334] "Generic (PLEG): container finished" podID="eade01dc-846b-42a8-a6ed-8cf0a0663e82" containerID="9d26a6d6092efc3cfe1b53bda2539e32fc75d0f27a288ecda4b2062254a0fc73" exitCode=0 Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.784778 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497830-rpwmp" event={"ID":"eade01dc-846b-42a8-a6ed-8cf0a0663e82","Type":"ContainerDied","Data":"9d26a6d6092efc3cfe1b53bda2539e32fc75d0f27a288ecda4b2062254a0fc73"} Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.788681 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.788933 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c447796d-48ac-4eeb-8fe6-ad411966b3d3-catalog-content\") pod \"certified-operators-2lq4t\" (UID: \"c447796d-48ac-4eeb-8fe6-ad411966b3d3\") " pod="openshift-marketplace/certified-operators-2lq4t" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.788977 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c447796d-48ac-4eeb-8fe6-ad411966b3d3-utilities\") pod \"certified-operators-2lq4t\" (UID: \"c447796d-48ac-4eeb-8fe6-ad411966b3d3\") " pod="openshift-marketplace/certified-operators-2lq4t" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.788998 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2g96\" (UniqueName: \"kubernetes.io/projected/c447796d-48ac-4eeb-8fe6-ad411966b3d3-kube-api-access-n2g96\") pod \"certified-operators-2lq4t\" (UID: \"c447796d-48ac-4eeb-8fe6-ad411966b3d3\") " pod="openshift-marketplace/certified-operators-2lq4t" Jan 31 14:44:06 crc kubenswrapper[4751]: E0131 14:44:06.789124 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:07.289061257 +0000 UTC m=+151.663774142 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.829992 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ln2lx" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.833544 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.834189 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.837557 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.837580 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.851159 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.864754 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-m4m6r"] Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.870988 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5hn9b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed 
with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:44:06 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Jan 31 14:44:06 crc kubenswrapper[4751]: [+]process-running ok Jan 31 14:44:06 crc kubenswrapper[4751]: healthz check failed Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.871125 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5hn9b" podUID="01ff1674-4e01-4cdc-aea3-1e91a6a389e3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.891036 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c447796d-48ac-4eeb-8fe6-ad411966b3d3-catalog-content\") pod \"certified-operators-2lq4t\" (UID: \"c447796d-48ac-4eeb-8fe6-ad411966b3d3\") " pod="openshift-marketplace/certified-operators-2lq4t" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.891139 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44a681ea-f7f5-4eba-b40e-03ea17fd4bf8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"44a681ea-f7f5-4eba-b40e-03ea17fd4bf8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.891174 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c447796d-48ac-4eeb-8fe6-ad411966b3d3-utilities\") pod \"certified-operators-2lq4t\" (UID: \"c447796d-48ac-4eeb-8fe6-ad411966b3d3\") " pod="openshift-marketplace/certified-operators-2lq4t" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.891202 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2g96\" (UniqueName: 
\"kubernetes.io/projected/c447796d-48ac-4eeb-8fe6-ad411966b3d3-kube-api-access-n2g96\") pod \"certified-operators-2lq4t\" (UID: \"c447796d-48ac-4eeb-8fe6-ad411966b3d3\") " pod="openshift-marketplace/certified-operators-2lq4t" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.891245 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/44a681ea-f7f5-4eba-b40e-03ea17fd4bf8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"44a681ea-f7f5-4eba-b40e-03ea17fd4bf8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.891315 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:06 crc kubenswrapper[4751]: E0131 14:44:06.891930 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:07.391904442 +0000 UTC m=+151.766617317 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.892405 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c447796d-48ac-4eeb-8fe6-ad411966b3d3-catalog-content\") pod \"certified-operators-2lq4t\" (UID: \"c447796d-48ac-4eeb-8fe6-ad411966b3d3\") " pod="openshift-marketplace/certified-operators-2lq4t" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.892622 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c447796d-48ac-4eeb-8fe6-ad411966b3d3-utilities\") pod \"certified-operators-2lq4t\" (UID: \"c447796d-48ac-4eeb-8fe6-ad411966b3d3\") " pod="openshift-marketplace/certified-operators-2lq4t" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.916624 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-wcnsn"] Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.919872 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2g96\" (UniqueName: \"kubernetes.io/projected/c447796d-48ac-4eeb-8fe6-ad411966b3d3-kube-api-access-n2g96\") pod \"certified-operators-2lq4t\" (UID: \"c447796d-48ac-4eeb-8fe6-ad411966b3d3\") " pod="openshift-marketplace/certified-operators-2lq4t" Jan 31 14:44:06 crc kubenswrapper[4751]: W0131 14:44:06.944064 4751 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod074619b7_9220_4377_b93d_6088199a5e16.slice/crio-092d3acc3e94a3dfd58bc12b9df82ef7950bf9b5a3e7871999c9c0efa3eb1c6d WatchSource:0}: Error finding container 092d3acc3e94a3dfd58bc12b9df82ef7950bf9b5a3e7871999c9c0efa3eb1c6d: Status 404 returned error can't find the container with id 092d3acc3e94a3dfd58bc12b9df82ef7950bf9b5a3e7871999c9c0efa3eb1c6d Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.993009 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.993836 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/44a681ea-f7f5-4eba-b40e-03ea17fd4bf8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"44a681ea-f7f5-4eba-b40e-03ea17fd4bf8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.993939 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44a681ea-f7f5-4eba-b40e-03ea17fd4bf8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"44a681ea-f7f5-4eba-b40e-03ea17fd4bf8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.993949 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/44a681ea-f7f5-4eba-b40e-03ea17fd4bf8-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"44a681ea-f7f5-4eba-b40e-03ea17fd4bf8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 
14:44:06 crc kubenswrapper[4751]: E0131 14:44:06.994051 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:07.494030039 +0000 UTC m=+151.868742974 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:06 crc kubenswrapper[4751]: I0131 14:44:06.998613 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2lq4t" Jan 31 14:44:07 crc kubenswrapper[4751]: I0131 14:44:07.010988 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44a681ea-f7f5-4eba-b40e-03ea17fd4bf8-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"44a681ea-f7f5-4eba-b40e-03ea17fd4bf8\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 14:44:07 crc kubenswrapper[4751]: I0131 14:44:07.094839 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:07 crc kubenswrapper[4751]: E0131 14:44:07.095167 4751 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:07.59515626 +0000 UTC m=+151.969869135 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:07 crc kubenswrapper[4751]: I0131 14:44:07.113603 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ln2lx"] Jan 31 14:44:07 crc kubenswrapper[4751]: I0131 14:44:07.155511 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 14:44:07 crc kubenswrapper[4751]: I0131 14:44:07.195794 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:07 crc kubenswrapper[4751]: E0131 14:44:07.196125 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:07.696108935 +0000 UTC m=+152.070821820 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:07 crc kubenswrapper[4751]: W0131 14:44:07.220389 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd5c0f5c8_cecf_451f_abef_bf357716eb71.slice/crio-a9f4794a6036dec4476c4be7ee2587554c0cf25782f49b8a2635038cb9771dcf WatchSource:0}: Error finding container a9f4794a6036dec4476c4be7ee2587554c0cf25782f49b8a2635038cb9771dcf: Status 404 returned error can't find the container with id a9f4794a6036dec4476c4be7ee2587554c0cf25782f49b8a2635038cb9771dcf Jan 31 14:44:07 crc kubenswrapper[4751]: I0131 14:44:07.256896 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2lq4t"] Jan 31 14:44:07 crc kubenswrapper[4751]: I0131 14:44:07.299749 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:07 crc kubenswrapper[4751]: E0131 14:44:07.300030 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:07.800017959 +0000 UTC m=+152.174730844 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:07 crc kubenswrapper[4751]: I0131 14:44:07.400836 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:07 crc kubenswrapper[4751]: E0131 14:44:07.401034 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:07.901002576 +0000 UTC m=+152.275715461 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:07 crc kubenswrapper[4751]: I0131 14:44:07.401316 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:07 crc kubenswrapper[4751]: E0131 14:44:07.401699 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:07.901688644 +0000 UTC m=+152.276401529 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:07 crc kubenswrapper[4751]: I0131 14:44:07.411627 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 31 14:44:07 crc kubenswrapper[4751]: I0131 14:44:07.502475 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:07 crc kubenswrapper[4751]: E0131 14:44:07.502772 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:08.002757202 +0000 UTC m=+152.377470087 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:07 crc kubenswrapper[4751]: I0131 14:44:07.603488 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:07 crc kubenswrapper[4751]: E0131 14:44:07.603958 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:08.103940603 +0000 UTC m=+152.478653548 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:07 crc kubenswrapper[4751]: I0131 14:44:07.704210 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:07 crc kubenswrapper[4751]: E0131 14:44:07.704421 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:08.204390986 +0000 UTC m=+152.579103871 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:07 crc kubenswrapper[4751]: I0131 14:44:07.704903 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:07 crc kubenswrapper[4751]: E0131 14:44:07.705213 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:08.205200327 +0000 UTC m=+152.579913212 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:07 crc kubenswrapper[4751]: I0131 14:44:07.789865 4751 generic.go:334] "Generic (PLEG): container finished" podID="074619b7-9220-4377-b93d-6088199a5e16" containerID="c0a252955873aa8b7cfdf7c617f1852f7e64f86f50411d0f5cc675309d6a71b6" exitCode=0 Jan 31 14:44:07 crc kubenswrapper[4751]: I0131 14:44:07.789927 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wcnsn" event={"ID":"074619b7-9220-4377-b93d-6088199a5e16","Type":"ContainerDied","Data":"c0a252955873aa8b7cfdf7c617f1852f7e64f86f50411d0f5cc675309d6a71b6"} Jan 31 14:44:07 crc kubenswrapper[4751]: I0131 14:44:07.789953 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wcnsn" event={"ID":"074619b7-9220-4377-b93d-6088199a5e16","Type":"ContainerStarted","Data":"092d3acc3e94a3dfd58bc12b9df82ef7950bf9b5a3e7871999c9c0efa3eb1c6d"} Jan 31 14:44:07 crc kubenswrapper[4751]: I0131 14:44:07.791268 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 14:44:07 crc kubenswrapper[4751]: I0131 14:44:07.791537 4751 generic.go:334] "Generic (PLEG): container finished" podID="8d5f1383-42d7-47a1-9e47-8dba038241d2" containerID="e34fa377384a9a30f2361b80400e882c53155e0b5c8ad5f9beb3a5c178384ca0" exitCode=0 Jan 31 14:44:07 crc kubenswrapper[4751]: I0131 14:44:07.791594 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m4m6r" 
event={"ID":"8d5f1383-42d7-47a1-9e47-8dba038241d2","Type":"ContainerDied","Data":"e34fa377384a9a30f2361b80400e882c53155e0b5c8ad5f9beb3a5c178384ca0"} Jan 31 14:44:07 crc kubenswrapper[4751]: I0131 14:44:07.791617 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m4m6r" event={"ID":"8d5f1383-42d7-47a1-9e47-8dba038241d2","Type":"ContainerStarted","Data":"cce74deb968262c3870a67f8d4e000b52815c6a74a72fbfe9270cef7ee6b23e7"} Jan 31 14:44:07 crc kubenswrapper[4751]: I0131 14:44:07.794645 4751 generic.go:334] "Generic (PLEG): container finished" podID="c447796d-48ac-4eeb-8fe6-ad411966b3d3" containerID="f55678880104a29f2f67c32892dfe2939404ec7dce246a6e2dd6c365f96de5ab" exitCode=0 Jan 31 14:44:07 crc kubenswrapper[4751]: I0131 14:44:07.794694 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2lq4t" event={"ID":"c447796d-48ac-4eeb-8fe6-ad411966b3d3","Type":"ContainerDied","Data":"f55678880104a29f2f67c32892dfe2939404ec7dce246a6e2dd6c365f96de5ab"} Jan 31 14:44:07 crc kubenswrapper[4751]: I0131 14:44:07.794707 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2lq4t" event={"ID":"c447796d-48ac-4eeb-8fe6-ad411966b3d3","Type":"ContainerStarted","Data":"5c1f5c13def0721993c42fbb7e9330a705cffc8e6326a288871d364ef1275f63"} Jan 31 14:44:07 crc kubenswrapper[4751]: I0131 14:44:07.795781 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"44a681ea-f7f5-4eba-b40e-03ea17fd4bf8","Type":"ContainerStarted","Data":"370d04a4b77cb1df2a005e656252d040f2a1db0e2e84ee84b32b04f105cfd9d0"} Jan 31 14:44:07 crc kubenswrapper[4751]: I0131 14:44:07.798889 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x4rnh" 
event={"ID":"a55fc688-004a-4d6f-a48e-c10b0ae1d8f1","Type":"ContainerStarted","Data":"3a7629ab15f1a744d9219d9345494f0ef6457c0d790da978b3f784b4ef8a6850"} Jan 31 14:44:07 crc kubenswrapper[4751]: I0131 14:44:07.800085 4751 generic.go:334] "Generic (PLEG): container finished" podID="d5c0f5c8-cecf-451f-abef-bf357716eb71" containerID="4b70b0f5c40fae7241cf1b33c7ddc52732dc42394eac071686d9ade2daf20d08" exitCode=0 Jan 31 14:44:07 crc kubenswrapper[4751]: I0131 14:44:07.800165 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ln2lx" event={"ID":"d5c0f5c8-cecf-451f-abef-bf357716eb71","Type":"ContainerDied","Data":"4b70b0f5c40fae7241cf1b33c7ddc52732dc42394eac071686d9ade2daf20d08"} Jan 31 14:44:07 crc kubenswrapper[4751]: I0131 14:44:07.800188 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ln2lx" event={"ID":"d5c0f5c8-cecf-451f-abef-bf357716eb71","Type":"ContainerStarted","Data":"a9f4794a6036dec4476c4be7ee2587554c0cf25782f49b8a2635038cb9771dcf"} Jan 31 14:44:07 crc kubenswrapper[4751]: I0131 14:44:07.805862 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:07 crc kubenswrapper[4751]: E0131 14:44:07.806166 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:08.306152503 +0000 UTC m=+152.680865378 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:07 crc kubenswrapper[4751]: I0131 14:44:07.868119 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5hn9b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:44:07 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Jan 31 14:44:07 crc kubenswrapper[4751]: [+]process-running ok Jan 31 14:44:07 crc kubenswrapper[4751]: healthz check failed Jan 31 14:44:07 crc kubenswrapper[4751]: I0131 14:44:07.868189 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5hn9b" podUID="01ff1674-4e01-4cdc-aea3-1e91a6a389e3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:44:07 crc kubenswrapper[4751]: I0131 14:44:07.908944 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:07 crc kubenswrapper[4751]: E0131 14:44:07.910613 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-31 14:44:08.410599761 +0000 UTC m=+152.785312656 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.009713 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:08 crc kubenswrapper[4751]: E0131 14:44:08.010130 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:08.510115899 +0000 UTC m=+152.884828784 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.019133 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-sxjf5" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.038337 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.046065 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497830-rpwmp" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.046632 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-wdsj4" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.095214 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-k2xfl"] Jan 31 14:44:08 crc kubenswrapper[4751]: E0131 14:44:08.095457 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eade01dc-846b-42a8-a6ed-8cf0a0663e82" containerName="collect-profiles" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.095469 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="eade01dc-846b-42a8-a6ed-8cf0a0663e82" containerName="collect-profiles" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.095580 4751 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="eade01dc-846b-42a8-a6ed-8cf0a0663e82" containerName="collect-profiles" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.096369 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k2xfl" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.097761 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k2xfl"] Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.101615 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.110653 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eade01dc-846b-42a8-a6ed-8cf0a0663e82-config-volume\") pod \"eade01dc-846b-42a8-a6ed-8cf0a0663e82\" (UID: \"eade01dc-846b-42a8-a6ed-8cf0a0663e82\") " Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.110693 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eade01dc-846b-42a8-a6ed-8cf0a0663e82-secret-volume\") pod \"eade01dc-846b-42a8-a6ed-8cf0a0663e82\" (UID: \"eade01dc-846b-42a8-a6ed-8cf0a0663e82\") " Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.110892 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zwjbh\" (UniqueName: \"kubernetes.io/projected/eade01dc-846b-42a8-a6ed-8cf0a0663e82-kube-api-access-zwjbh\") pod \"eade01dc-846b-42a8-a6ed-8cf0a0663e82\" (UID: \"eade01dc-846b-42a8-a6ed-8cf0a0663e82\") " Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.111050 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.113781 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eade01dc-846b-42a8-a6ed-8cf0a0663e82-config-volume" (OuterVolumeSpecName: "config-volume") pod "eade01dc-846b-42a8-a6ed-8cf0a0663e82" (UID: "eade01dc-846b-42a8-a6ed-8cf0a0663e82"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:44:08 crc kubenswrapper[4751]: E0131 14:44:08.115859 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:08.615842121 +0000 UTC m=+152.990555006 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.121044 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eade01dc-846b-42a8-a6ed-8cf0a0663e82-kube-api-access-zwjbh" (OuterVolumeSpecName: "kube-api-access-zwjbh") pod "eade01dc-846b-42a8-a6ed-8cf0a0663e82" (UID: "eade01dc-846b-42a8-a6ed-8cf0a0663e82"). InnerVolumeSpecName "kube-api-access-zwjbh". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.121914 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eade01dc-846b-42a8-a6ed-8cf0a0663e82-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "eade01dc-846b-42a8-a6ed-8cf0a0663e82" (UID: "eade01dc-846b-42a8-a6ed-8cf0a0663e82"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.169666 4751 patch_prober.go:28] interesting pod/downloads-7954f5f757-4m7jl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.169724 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4m7jl" podUID="d723501b-bb29-4d60-ad97-239eb749771f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.173980 4751 patch_prober.go:28] interesting pod/downloads-7954f5f757-4m7jl container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.174013 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-4m7jl" podUID="d723501b-bb29-4d60-ad97-239eb749771f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.215989 4751 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.216170 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wkxqv\" (UniqueName: \"kubernetes.io/projected/e656c7af-fbd9-4e9c-ae61-d4142d37c89f-kube-api-access-wkxqv\") pod \"redhat-marketplace-k2xfl\" (UID: \"e656c7af-fbd9-4e9c-ae61-d4142d37c89f\") " pod="openshift-marketplace/redhat-marketplace-k2xfl" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.216216 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e656c7af-fbd9-4e9c-ae61-d4142d37c89f-utilities\") pod \"redhat-marketplace-k2xfl\" (UID: \"e656c7af-fbd9-4e9c-ae61-d4142d37c89f\") " pod="openshift-marketplace/redhat-marketplace-k2xfl" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.216263 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e656c7af-fbd9-4e9c-ae61-d4142d37c89f-catalog-content\") pod \"redhat-marketplace-k2xfl\" (UID: \"e656c7af-fbd9-4e9c-ae61-d4142d37c89f\") " pod="openshift-marketplace/redhat-marketplace-k2xfl" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.216364 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zwjbh\" (UniqueName: \"kubernetes.io/projected/eade01dc-846b-42a8-a6ed-8cf0a0663e82-kube-api-access-zwjbh\") on node \"crc\" DevicePath \"\"" Jan 31 14:44:08 crc kubenswrapper[4751]: E0131 14:44:08.216369 4751 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:08.716350495 +0000 UTC m=+153.091063470 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.216399 4751 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/eade01dc-846b-42a8-a6ed-8cf0a0663e82-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.216412 4751 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/eade01dc-846b-42a8-a6ed-8cf0a0663e82-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.318874 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e656c7af-fbd9-4e9c-ae61-d4142d37c89f-catalog-content\") pod \"redhat-marketplace-k2xfl\" (UID: \"e656c7af-fbd9-4e9c-ae61-d4142d37c89f\") " pod="openshift-marketplace/redhat-marketplace-k2xfl" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.319049 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wkxqv\" (UniqueName: \"kubernetes.io/projected/e656c7af-fbd9-4e9c-ae61-d4142d37c89f-kube-api-access-wkxqv\") pod \"redhat-marketplace-k2xfl\" (UID: 
\"e656c7af-fbd9-4e9c-ae61-d4142d37c89f\") " pod="openshift-marketplace/redhat-marketplace-k2xfl" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.319147 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.319232 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e656c7af-fbd9-4e9c-ae61-d4142d37c89f-utilities\") pod \"redhat-marketplace-k2xfl\" (UID: \"e656c7af-fbd9-4e9c-ae61-d4142d37c89f\") " pod="openshift-marketplace/redhat-marketplace-k2xfl" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.319355 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e656c7af-fbd9-4e9c-ae61-d4142d37c89f-catalog-content\") pod \"redhat-marketplace-k2xfl\" (UID: \"e656c7af-fbd9-4e9c-ae61-d4142d37c89f\") " pod="openshift-marketplace/redhat-marketplace-k2xfl" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.319745 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e656c7af-fbd9-4e9c-ae61-d4142d37c89f-utilities\") pod \"redhat-marketplace-k2xfl\" (UID: \"e656c7af-fbd9-4e9c-ae61-d4142d37c89f\") " pod="openshift-marketplace/redhat-marketplace-k2xfl" Jan 31 14:44:08 crc kubenswrapper[4751]: E0131 14:44:08.319883 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-31 14:44:08.819872758 +0000 UTC m=+153.194585643 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.337218 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wkxqv\" (UniqueName: \"kubernetes.io/projected/e656c7af-fbd9-4e9c-ae61-d4142d37c89f-kube-api-access-wkxqv\") pod \"redhat-marketplace-k2xfl\" (UID: \"e656c7af-fbd9-4e9c-ae61-d4142d37c89f\") " pod="openshift-marketplace/redhat-marketplace-k2xfl" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.420452 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:08 crc kubenswrapper[4751]: E0131 14:44:08.420987 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:08.920965038 +0000 UTC m=+153.295677943 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.446684 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k2xfl" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.484514 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-nfjx5"] Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.486059 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nfjx5" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.499744 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nfjx5"] Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.522667 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:08 crc kubenswrapper[4751]: E0131 14:44:08.523015 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:09.022996222 +0000 UTC m=+153.397709107 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.544417 4751 patch_prober.go:28] interesting pod/apiserver-76f77b778f-5f7jc container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 31 14:44:08 crc kubenswrapper[4751]: [+]log ok Jan 31 14:44:08 crc kubenswrapper[4751]: [+]etcd ok Jan 31 14:44:08 crc kubenswrapper[4751]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 31 14:44:08 crc kubenswrapper[4751]: [+]poststarthook/generic-apiserver-start-informers ok Jan 31 14:44:08 crc kubenswrapper[4751]: [+]poststarthook/max-in-flight-filter ok Jan 31 14:44:08 crc kubenswrapper[4751]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 31 14:44:08 crc kubenswrapper[4751]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 31 14:44:08 crc kubenswrapper[4751]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 31 14:44:08 crc kubenswrapper[4751]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 31 14:44:08 crc kubenswrapper[4751]: [+]poststarthook/project.openshift.io-projectcache ok Jan 31 14:44:08 crc kubenswrapper[4751]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 31 14:44:08 crc kubenswrapper[4751]: [+]poststarthook/openshift.io-startinformers ok Jan 31 14:44:08 crc kubenswrapper[4751]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 31 14:44:08 crc 
kubenswrapper[4751]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 31 14:44:08 crc kubenswrapper[4751]: livez check failed Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.544496 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" podUID="89314349-bbc8-4886-b93b-51358e4e71b0" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.605527 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.606919 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.612861 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.612899 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.623710 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:08 crc kubenswrapper[4751]: E0131 14:44:08.623853 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:09.123813894 +0000 UTC m=+153.498526789 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.624053 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.624114 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4nf7x\" (UniqueName: \"kubernetes.io/projected/e771b68a-beea-4c8b-a085-b869155ca20d-kube-api-access-4nf7x\") pod \"redhat-marketplace-nfjx5\" (UID: \"e771b68a-beea-4c8b-a085-b869155ca20d\") " pod="openshift-marketplace/redhat-marketplace-nfjx5" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.624180 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e771b68a-beea-4c8b-a085-b869155ca20d-utilities\") pod \"redhat-marketplace-nfjx5\" (UID: \"e771b68a-beea-4c8b-a085-b869155ca20d\") " pod="openshift-marketplace/redhat-marketplace-nfjx5" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.624292 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/e771b68a-beea-4c8b-a085-b869155ca20d-catalog-content\") pod \"redhat-marketplace-nfjx5\" (UID: \"e771b68a-beea-4c8b-a085-b869155ca20d\") " pod="openshift-marketplace/redhat-marketplace-nfjx5" Jan 31 14:44:08 crc kubenswrapper[4751]: E0131 14:44:08.624684 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:09.124674167 +0000 UTC m=+153.499387062 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.627325 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.681252 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-k2xfl"] Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.725412 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:08 crc kubenswrapper[4751]: E0131 14:44:08.725581 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:09.225558061 +0000 UTC m=+153.600270956 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.725641 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.725672 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4nf7x\" (UniqueName: \"kubernetes.io/projected/e771b68a-beea-4c8b-a085-b869155ca20d-kube-api-access-4nf7x\") pod \"redhat-marketplace-nfjx5\" (UID: \"e771b68a-beea-4c8b-a085-b869155ca20d\") " pod="openshift-marketplace/redhat-marketplace-nfjx5" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.725715 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e771b68a-beea-4c8b-a085-b869155ca20d-utilities\") pod \"redhat-marketplace-nfjx5\" (UID: \"e771b68a-beea-4c8b-a085-b869155ca20d\") " pod="openshift-marketplace/redhat-marketplace-nfjx5" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.725768 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7322d0f6-a94f-48be-98fb-b2883f20cc53-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7322d0f6-a94f-48be-98fb-b2883f20cc53\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.725820 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7322d0f6-a94f-48be-98fb-b2883f20cc53-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7322d0f6-a94f-48be-98fb-b2883f20cc53\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.725846 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e771b68a-beea-4c8b-a085-b869155ca20d-catalog-content\") pod \"redhat-marketplace-nfjx5\" (UID: \"e771b68a-beea-4c8b-a085-b869155ca20d\") " pod="openshift-marketplace/redhat-marketplace-nfjx5" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.726486 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e771b68a-beea-4c8b-a085-b869155ca20d-catalog-content\") pod \"redhat-marketplace-nfjx5\" (UID: \"e771b68a-beea-4c8b-a085-b869155ca20d\") " pod="openshift-marketplace/redhat-marketplace-nfjx5" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.726632 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e771b68a-beea-4c8b-a085-b869155ca20d-utilities\") pod \"redhat-marketplace-nfjx5\" (UID: \"e771b68a-beea-4c8b-a085-b869155ca20d\") " pod="openshift-marketplace/redhat-marketplace-nfjx5" Jan 31 14:44:08 crc kubenswrapper[4751]: E0131 14:44:08.726807 4751 nestedpendingoperations.go:348] Operation for 
"{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:09.226795984 +0000 UTC m=+153.601508979 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.757689 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4nf7x\" (UniqueName: \"kubernetes.io/projected/e771b68a-beea-4c8b-a085-b869155ca20d-kube-api-access-4nf7x\") pod \"redhat-marketplace-nfjx5\" (UID: \"e771b68a-beea-4c8b-a085-b869155ca20d\") " pod="openshift-marketplace/redhat-marketplace-nfjx5" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.815144 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nfjx5" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.824379 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k2xfl" event={"ID":"e656c7af-fbd9-4e9c-ae61-d4142d37c89f","Type":"ContainerStarted","Data":"ed378354261ea17a2d24e834a9aed8f1a45166375fb6ae1ce1dc38b9af3b5e0f"} Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.828898 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.829214 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7322d0f6-a94f-48be-98fb-b2883f20cc53-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7322d0f6-a94f-48be-98fb-b2883f20cc53\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.829260 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7322d0f6-a94f-48be-98fb-b2883f20cc53-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7322d0f6-a94f-48be-98fb-b2883f20cc53\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 14:44:08 crc kubenswrapper[4751]: E0131 14:44:08.829701 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:09.32967943 +0000 UTC m=+153.704392305 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.829736 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7322d0f6-a94f-48be-98fb-b2883f20cc53-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"7322d0f6-a94f-48be-98fb-b2883f20cc53\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.851256 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497830-rpwmp" event={"ID":"eade01dc-846b-42a8-a6ed-8cf0a0663e82","Type":"ContainerDied","Data":"9d44648df839910022878a08450dec667db28fe365908b86584da87c8884b401"} Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.851333 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9d44648df839910022878a08450dec667db28fe365908b86584da87c8884b401" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.851531 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497830-rpwmp" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.856988 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7322d0f6-a94f-48be-98fb-b2883f20cc53-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"7322d0f6-a94f-48be-98fb-b2883f20cc53\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.864031 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-5hn9b" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.868613 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"44a681ea-f7f5-4eba-b40e-03ea17fd4bf8","Type":"ContainerStarted","Data":"3baa617a27e83d80f5320f7cc47fc62891a992ae7b55cc71b019d15fc16ab870"} Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.869559 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5hn9b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:44:08 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Jan 31 14:44:08 crc kubenswrapper[4751]: [+]process-running ok Jan 31 14:44:08 crc kubenswrapper[4751]: healthz check failed Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.869617 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5hn9b" podUID="01ff1674-4e01-4cdc-aea3-1e91a6a389e3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.873786 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="hostpath-provisioner/csi-hostpathplugin-x4rnh" event={"ID":"a55fc688-004a-4d6f-a48e-c10b0ae1d8f1","Type":"ContainerStarted","Data":"93d7520b8789253b8932588bc554d325acef997a5d57f5ac4aef79ae4024916e"} Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.883772 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=2.883751968 podStartE2EDuration="2.883751968s" podCreationTimestamp="2026-01-31 14:44:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:08.88345257 +0000 UTC m=+153.258165455" watchObservedRunningTime="2026-01-31 14:44:08.883751968 +0000 UTC m=+153.258464853" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.897488 4751 patch_prober.go:28] interesting pod/machine-config-daemon-2wpj7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.897548 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.928635 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-h262z" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.928696 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-h262z" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.930152 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:08 crc kubenswrapper[4751]: E0131 14:44:08.930474 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:09.430459172 +0000 UTC m=+153.805172057 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.931837 4751 patch_prober.go:28] interesting pod/console-f9d7485db-h262z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.931871 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-h262z" podUID="5caeb3dc-2a42-41b5-ac91-c1c8a216fb43" containerName="console" probeResult="failure" output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.945628 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 14:44:08 crc kubenswrapper[4751]: I0131 14:44:08.991657 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-7hjp9" Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.023364 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-vc9q2" Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.026111 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-5r6kv" Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.031591 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:09 crc kubenswrapper[4751]: E0131 14:44:09.031693 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:09.531675884 +0000 UTC m=+153.906388759 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.032666 4751 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.033613 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:09 crc kubenswrapper[4751]: E0131 14:44:09.034150 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:09.534136609 +0000 UTC m=+153.908849494 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.048716 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-7hc86" Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.081938 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-nfjx5"] Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.135620 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:09 crc kubenswrapper[4751]: E0131 14:44:09.135777 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:09.635756403 +0000 UTC m=+154.010469288 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.135992 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:09 crc kubenswrapper[4751]: E0131 14:44:09.139193 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:09.639161303 +0000 UTC m=+154.013874358 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.238910 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:09 crc kubenswrapper[4751]: E0131 14:44:09.239109 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:09.73905737 +0000 UTC m=+154.113770255 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.239374 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:09 crc kubenswrapper[4751]: E0131 14:44:09.239809 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:09.73979266 +0000 UTC m=+154.114505545 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.270402 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-gktqp"] Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.271410 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gktqp" Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.273454 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.287853 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gktqp"] Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.340595 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:09 crc kubenswrapper[4751]: E0131 14:44:09.340783 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:09.840756006 +0000 UTC m=+154.215468891 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.340832 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cfb2e52-7371-4d38-994c-92b5b7d123cc-catalog-content\") pod \"redhat-operators-gktqp\" (UID: \"0cfb2e52-7371-4d38-994c-92b5b7d123cc\") " pod="openshift-marketplace/redhat-operators-gktqp" Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.340925 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qgf8m\" (UniqueName: \"kubernetes.io/projected/0cfb2e52-7371-4d38-994c-92b5b7d123cc-kube-api-access-qgf8m\") pod \"redhat-operators-gktqp\" (UID: \"0cfb2e52-7371-4d38-994c-92b5b7d123cc\") " pod="openshift-marketplace/redhat-operators-gktqp" Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.340991 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cfb2e52-7371-4d38-994c-92b5b7d123cc-utilities\") pod \"redhat-operators-gktqp\" (UID: \"0cfb2e52-7371-4d38-994c-92b5b7d123cc\") " pod="openshift-marketplace/redhat-operators-gktqp" Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.341038 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:09 crc kubenswrapper[4751]: E0131 14:44:09.341565 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:09.841539427 +0000 UTC m=+154.216252342 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.393990 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 31 14:44:09 crc kubenswrapper[4751]: W0131 14:44:09.426430 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7322d0f6_a94f_48be_98fb_b2883f20cc53.slice/crio-a3df1f6a5863eb19b6d181d426ef7f986d5e8f0fcb559160484636c7ea634096 WatchSource:0}: Error finding container a3df1f6a5863eb19b6d181d426ef7f986d5e8f0fcb559160484636c7ea634096: Status 404 returned error can't find the container with id a3df1f6a5863eb19b6d181d426ef7f986d5e8f0fcb559160484636c7ea634096 Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.442212 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: 
\"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:09 crc kubenswrapper[4751]: E0131 14:44:09.442505 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:09.942460642 +0000 UTC m=+154.317173537 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.443443 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qgf8m\" (UniqueName: \"kubernetes.io/projected/0cfb2e52-7371-4d38-994c-92b5b7d123cc-kube-api-access-qgf8m\") pod \"redhat-operators-gktqp\" (UID: \"0cfb2e52-7371-4d38-994c-92b5b7d123cc\") " pod="openshift-marketplace/redhat-operators-gktqp" Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.443550 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cfb2e52-7371-4d38-994c-92b5b7d123cc-utilities\") pod \"redhat-operators-gktqp\" (UID: \"0cfb2e52-7371-4d38-994c-92b5b7d123cc\") " pod="openshift-marketplace/redhat-operators-gktqp" Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.443620 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.443686 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cfb2e52-7371-4d38-994c-92b5b7d123cc-catalog-content\") pod \"redhat-operators-gktqp\" (UID: \"0cfb2e52-7371-4d38-994c-92b5b7d123cc\") " pod="openshift-marketplace/redhat-operators-gktqp" Jan 31 14:44:09 crc kubenswrapper[4751]: E0131 14:44:09.444181 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:09.944157826 +0000 UTC m=+154.318870721 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.444483 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cfb2e52-7371-4d38-994c-92b5b7d123cc-catalog-content\") pod \"redhat-operators-gktqp\" (UID: \"0cfb2e52-7371-4d38-994c-92b5b7d123cc\") " pod="openshift-marketplace/redhat-operators-gktqp" Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.444840 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cfb2e52-7371-4d38-994c-92b5b7d123cc-utilities\") pod 
\"redhat-operators-gktqp\" (UID: \"0cfb2e52-7371-4d38-994c-92b5b7d123cc\") " pod="openshift-marketplace/redhat-operators-gktqp" Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.466219 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qgf8m\" (UniqueName: \"kubernetes.io/projected/0cfb2e52-7371-4d38-994c-92b5b7d123cc-kube-api-access-qgf8m\") pod \"redhat-operators-gktqp\" (UID: \"0cfb2e52-7371-4d38-994c-92b5b7d123cc\") " pod="openshift-marketplace/redhat-operators-gktqp" Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.545239 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:09 crc kubenswrapper[4751]: E0131 14:44:09.545430 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:10.04540281 +0000 UTC m=+154.420115695 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.545662 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:09 crc kubenswrapper[4751]: E0131 14:44:09.546026 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:10.046002166 +0000 UTC m=+154.420715051 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.588057 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gktqp" Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.646938 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:09 crc kubenswrapper[4751]: E0131 14:44:09.647194 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:10.147137756 +0000 UTC m=+154.521850641 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.647324 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:09 crc kubenswrapper[4751]: E0131 14:44:09.647791 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: 
nodeName:}" failed. No retries permitted until 2026-01-31 14:44:10.147758383 +0000 UTC m=+154.522471268 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.674993 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-s7j7f"] Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.675981 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s7j7f" Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.687322 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s7j7f"] Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.748272 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:09 crc kubenswrapper[4751]: E0131 14:44:09.748474 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-31 14:44:10.248442891 +0000 UTC m=+154.623155786 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.749138 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hwzm\" (UniqueName: \"kubernetes.io/projected/f614f9ab-b5e2-4548-93e7-571d1ffb57b0-kube-api-access-8hwzm\") pod \"redhat-operators-s7j7f\" (UID: \"f614f9ab-b5e2-4548-93e7-571d1ffb57b0\") " pod="openshift-marketplace/redhat-operators-s7j7f" Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.749180 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f614f9ab-b5e2-4548-93e7-571d1ffb57b0-catalog-content\") pod \"redhat-operators-s7j7f\" (UID: \"f614f9ab-b5e2-4548-93e7-571d1ffb57b0\") " pod="openshift-marketplace/redhat-operators-s7j7f" Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.749335 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f614f9ab-b5e2-4548-93e7-571d1ffb57b0-utilities\") pod \"redhat-operators-s7j7f\" (UID: \"f614f9ab-b5e2-4548-93e7-571d1ffb57b0\") " pod="openshift-marketplace/redhat-operators-s7j7f" Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.749393 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod 
\"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:09 crc kubenswrapper[4751]: E0131 14:44:09.762514 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-31 14:44:10.262485242 +0000 UTC m=+154.637198127 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-mpbgx" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.831590 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-gktqp"] Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.836595 4751 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-31T14:44:09.032942248Z","Handler":null,"Name":""} Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.839512 4751 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.839545 4751 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 31 14:44:09 crc kubenswrapper[4751]: W0131 14:44:09.841311 4751 
manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0cfb2e52_7371_4d38_994c_92b5b7d123cc.slice/crio-6d8aa8d0e0300436346b38972033f042890b145471a99f8a553c2f56d280787e WatchSource:0}: Error finding container 6d8aa8d0e0300436346b38972033f042890b145471a99f8a553c2f56d280787e: Status 404 returned error can't find the container with id 6d8aa8d0e0300436346b38972033f042890b145471a99f8a553c2f56d280787e Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.851962 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.852208 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f614f9ab-b5e2-4548-93e7-571d1ffb57b0-catalog-content\") pod \"redhat-operators-s7j7f\" (UID: \"f614f9ab-b5e2-4548-93e7-571d1ffb57b0\") " pod="openshift-marketplace/redhat-operators-s7j7f" Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.852252 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f614f9ab-b5e2-4548-93e7-571d1ffb57b0-utilities\") pod \"redhat-operators-s7j7f\" (UID: \"f614f9ab-b5e2-4548-93e7-571d1ffb57b0\") " pod="openshift-marketplace/redhat-operators-s7j7f" Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.852329 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hwzm\" (UniqueName: \"kubernetes.io/projected/f614f9ab-b5e2-4548-93e7-571d1ffb57b0-kube-api-access-8hwzm\") pod \"redhat-operators-s7j7f\" (UID: \"f614f9ab-b5e2-4548-93e7-571d1ffb57b0\") " 
pod="openshift-marketplace/redhat-operators-s7j7f" Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.852746 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f614f9ab-b5e2-4548-93e7-571d1ffb57b0-catalog-content\") pod \"redhat-operators-s7j7f\" (UID: \"f614f9ab-b5e2-4548-93e7-571d1ffb57b0\") " pod="openshift-marketplace/redhat-operators-s7j7f" Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.858542 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f614f9ab-b5e2-4548-93e7-571d1ffb57b0-utilities\") pod \"redhat-operators-s7j7f\" (UID: \"f614f9ab-b5e2-4548-93e7-571d1ffb57b0\") " pod="openshift-marketplace/redhat-operators-s7j7f" Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.859303 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". 
PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.867737 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5hn9b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:44:09 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Jan 31 14:44:09 crc kubenswrapper[4751]: [+]process-running ok Jan 31 14:44:09 crc kubenswrapper[4751]: healthz check failed Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.867786 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5hn9b" podUID="01ff1674-4e01-4cdc-aea3-1e91a6a389e3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.873281 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hwzm\" (UniqueName: \"kubernetes.io/projected/f614f9ab-b5e2-4548-93e7-571d1ffb57b0-kube-api-access-8hwzm\") pod \"redhat-operators-s7j7f\" (UID: \"f614f9ab-b5e2-4548-93e7-571d1ffb57b0\") " pod="openshift-marketplace/redhat-operators-s7j7f" Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.889515 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-x4rnh" event={"ID":"a55fc688-004a-4d6f-a48e-c10b0ae1d8f1","Type":"ContainerStarted","Data":"ba5e0e5ba49d3b46b4260cd7fd4839fed6a2a5958b6941e1910ddfd9298fbde7"} Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.892317 4751 generic.go:334] "Generic (PLEG): container finished" podID="e771b68a-beea-4c8b-a085-b869155ca20d" containerID="cc1400d076f7032bfa7b9349903c39c2a8d9d2e65e96a7551c8c78a1f7255455" exitCode=0 Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.892408 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-nfjx5" event={"ID":"e771b68a-beea-4c8b-a085-b869155ca20d","Type":"ContainerDied","Data":"cc1400d076f7032bfa7b9349903c39c2a8d9d2e65e96a7551c8c78a1f7255455"} Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.892469 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nfjx5" event={"ID":"e771b68a-beea-4c8b-a085-b869155ca20d","Type":"ContainerStarted","Data":"17e2b2135e55e973ccc015ba33cfd9e0c7a1763d73b3153f649e1c6747bac744"} Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.894339 4751 generic.go:334] "Generic (PLEG): container finished" podID="e656c7af-fbd9-4e9c-ae61-d4142d37c89f" containerID="6c75c5ad4aa0723fec261497091fc30b60d95e73f9fe993ece85f3e477da66ef" exitCode=0 Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.894409 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k2xfl" event={"ID":"e656c7af-fbd9-4e9c-ae61-d4142d37c89f","Type":"ContainerDied","Data":"6c75c5ad4aa0723fec261497091fc30b60d95e73f9fe993ece85f3e477da66ef"} Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.896160 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7322d0f6-a94f-48be-98fb-b2883f20cc53","Type":"ContainerStarted","Data":"38c7f576f0ad4b5e8d74c391485eb57e1eab7f03e125ea86814743a2e11cd91c"} Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.896191 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7322d0f6-a94f-48be-98fb-b2883f20cc53","Type":"ContainerStarted","Data":"a3df1f6a5863eb19b6d181d426ef7f986d5e8f0fcb559160484636c7ea634096"} Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.897456 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gktqp" 
event={"ID":"0cfb2e52-7371-4d38-994c-92b5b7d123cc","Type":"ContainerStarted","Data":"6d8aa8d0e0300436346b38972033f042890b145471a99f8a553c2f56d280787e"} Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.903915 4751 generic.go:334] "Generic (PLEG): container finished" podID="44a681ea-f7f5-4eba-b40e-03ea17fd4bf8" containerID="3baa617a27e83d80f5320f7cc47fc62891a992ae7b55cc71b019d15fc16ab870" exitCode=0 Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.903951 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"44a681ea-f7f5-4eba-b40e-03ea17fd4bf8","Type":"ContainerDied","Data":"3baa617a27e83d80f5320f7cc47fc62891a992ae7b55cc71b019d15fc16ab870"} Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.915750 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-x4rnh" podStartSLOduration=14.915717788 podStartE2EDuration="14.915717788s" podCreationTimestamp="2026-01-31 14:43:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:09.909290999 +0000 UTC m=+154.284003884" watchObservedRunningTime="2026-01-31 14:44:09.915717788 +0000 UTC m=+154.290430673" Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.944990 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=1.944973141 podStartE2EDuration="1.944973141s" podCreationTimestamp="2026-01-31 14:44:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:09.928333372 +0000 UTC m=+154.303046267" watchObservedRunningTime="2026-01-31 14:44:09.944973141 +0000 UTC m=+154.319686026" Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.954232 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.961118 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 31 14:44:09 crc kubenswrapper[4751]: I0131 14:44:09.961160 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:10 crc kubenswrapper[4751]: I0131 14:44:09.999965 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-s7j7f" Jan 31 14:44:10 crc kubenswrapper[4751]: I0131 14:44:10.000153 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-mpbgx\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:10 crc kubenswrapper[4751]: I0131 14:44:10.221289 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-s7j7f"] Jan 31 14:44:10 crc kubenswrapper[4751]: I0131 14:44:10.276522 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:10 crc kubenswrapper[4751]: I0131 14:44:10.441935 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 31 14:44:10 crc kubenswrapper[4751]: I0131 14:44:10.555002 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mpbgx"] Jan 31 14:44:10 crc kubenswrapper[4751]: I0131 14:44:10.870644 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5hn9b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:44:10 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Jan 31 14:44:10 crc kubenswrapper[4751]: [+]process-running ok Jan 31 14:44:10 crc kubenswrapper[4751]: healthz check failed Jan 31 14:44:10 crc kubenswrapper[4751]: I0131 14:44:10.870712 4751 prober.go:107] "Probe failed" probeType="Startup" 
pod="openshift-ingress/router-default-5444994796-5hn9b" podUID="01ff1674-4e01-4cdc-aea3-1e91a6a389e3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:44:10 crc kubenswrapper[4751]: I0131 14:44:10.911023 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7j7f" event={"ID":"f614f9ab-b5e2-4548-93e7-571d1ffb57b0","Type":"ContainerStarted","Data":"7b416007999209b30e30ac3cbb706b9a31917cc6ff3256ae9a397696b89670d4"} Jan 31 14:44:10 crc kubenswrapper[4751]: I0131 14:44:10.913539 4751 generic.go:334] "Generic (PLEG): container finished" podID="0cfb2e52-7371-4d38-994c-92b5b7d123cc" containerID="aa50b668454ba4cf1d6033028034c77daf53f009e58a1184a7d22b857abf8b23" exitCode=0 Jan 31 14:44:10 crc kubenswrapper[4751]: I0131 14:44:10.913589 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gktqp" event={"ID":"0cfb2e52-7371-4d38-994c-92b5b7d123cc","Type":"ContainerDied","Data":"aa50b668454ba4cf1d6033028034c77daf53f009e58a1184a7d22b857abf8b23"} Jan 31 14:44:10 crc kubenswrapper[4751]: I0131 14:44:10.917529 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" event={"ID":"4e18e163-6cf0-48ef-9a6f-90cbece870b0","Type":"ContainerStarted","Data":"f189ebd73b2de2ffc6329477d3690421c7e4c89608c81de50df6ebb8b9b1c5e0"} Jan 31 14:44:11 crc kubenswrapper[4751]: I0131 14:44:11.174549 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 14:44:11 crc kubenswrapper[4751]: I0131 14:44:11.276267 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/44a681ea-f7f5-4eba-b40e-03ea17fd4bf8-kubelet-dir\") pod \"44a681ea-f7f5-4eba-b40e-03ea17fd4bf8\" (UID: \"44a681ea-f7f5-4eba-b40e-03ea17fd4bf8\") " Jan 31 14:44:11 crc kubenswrapper[4751]: I0131 14:44:11.276338 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44a681ea-f7f5-4eba-b40e-03ea17fd4bf8-kube-api-access\") pod \"44a681ea-f7f5-4eba-b40e-03ea17fd4bf8\" (UID: \"44a681ea-f7f5-4eba-b40e-03ea17fd4bf8\") " Jan 31 14:44:11 crc kubenswrapper[4751]: I0131 14:44:11.279235 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/44a681ea-f7f5-4eba-b40e-03ea17fd4bf8-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "44a681ea-f7f5-4eba-b40e-03ea17fd4bf8" (UID: "44a681ea-f7f5-4eba-b40e-03ea17fd4bf8"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:44:11 crc kubenswrapper[4751]: I0131 14:44:11.282374 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44a681ea-f7f5-4eba-b40e-03ea17fd4bf8-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "44a681ea-f7f5-4eba-b40e-03ea17fd4bf8" (UID: "44a681ea-f7f5-4eba-b40e-03ea17fd4bf8"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:44:11 crc kubenswrapper[4751]: I0131 14:44:11.378420 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/44a681ea-f7f5-4eba-b40e-03ea17fd4bf8-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 14:44:11 crc kubenswrapper[4751]: I0131 14:44:11.378460 4751 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/44a681ea-f7f5-4eba-b40e-03ea17fd4bf8-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 14:44:12 crc kubenswrapper[4751]: I0131 14:44:11.867116 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5hn9b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:44:12 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Jan 31 14:44:12 crc kubenswrapper[4751]: [+]process-running ok Jan 31 14:44:12 crc kubenswrapper[4751]: healthz check failed Jan 31 14:44:12 crc kubenswrapper[4751]: I0131 14:44:11.867195 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5hn9b" podUID="01ff1674-4e01-4cdc-aea3-1e91a6a389e3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:44:12 crc kubenswrapper[4751]: I0131 14:44:12.042344 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"44a681ea-f7f5-4eba-b40e-03ea17fd4bf8","Type":"ContainerDied","Data":"370d04a4b77cb1df2a005e656252d040f2a1db0e2e84ee84b32b04f105cfd9d0"} Jan 31 14:44:12 crc kubenswrapper[4751]: I0131 14:44:12.042419 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="370d04a4b77cb1df2a005e656252d040f2a1db0e2e84ee84b32b04f105cfd9d0" Jan 31 14:44:12 crc 
kubenswrapper[4751]: I0131 14:44:12.042418 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 31 14:44:12 crc kubenswrapper[4751]: I0131 14:44:12.867468 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5hn9b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:44:12 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Jan 31 14:44:12 crc kubenswrapper[4751]: [+]process-running ok Jan 31 14:44:12 crc kubenswrapper[4751]: healthz check failed Jan 31 14:44:12 crc kubenswrapper[4751]: I0131 14:44:12.867539 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5hn9b" podUID="01ff1674-4e01-4cdc-aea3-1e91a6a389e3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:44:13 crc kubenswrapper[4751]: I0131 14:44:13.049606 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" event={"ID":"4e18e163-6cf0-48ef-9a6f-90cbece870b0","Type":"ContainerStarted","Data":"4a4776950d27c1d1245ca6dd71fb7012b30d42bb2d21525539ad27b3f377c032"} Jan 31 14:44:13 crc kubenswrapper[4751]: I0131 14:44:13.049788 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:13 crc kubenswrapper[4751]: I0131 14:44:13.053285 4751 generic.go:334] "Generic (PLEG): container finished" podID="f614f9ab-b5e2-4548-93e7-571d1ffb57b0" containerID="9dea3e4098c379086439a00ba95f58535865ef9c6e3300b004af608a3da30bb4" exitCode=0 Jan 31 14:44:13 crc kubenswrapper[4751]: I0131 14:44:13.053301 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7j7f" 
event={"ID":"f614f9ab-b5e2-4548-93e7-571d1ffb57b0","Type":"ContainerDied","Data":"9dea3e4098c379086439a00ba95f58535865ef9c6e3300b004af608a3da30bb4"} Jan 31 14:44:13 crc kubenswrapper[4751]: I0131 14:44:13.055661 4751 generic.go:334] "Generic (PLEG): container finished" podID="7322d0f6-a94f-48be-98fb-b2883f20cc53" containerID="38c7f576f0ad4b5e8d74c391485eb57e1eab7f03e125ea86814743a2e11cd91c" exitCode=0 Jan 31 14:44:13 crc kubenswrapper[4751]: I0131 14:44:13.055694 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7322d0f6-a94f-48be-98fb-b2883f20cc53","Type":"ContainerDied","Data":"38c7f576f0ad4b5e8d74c391485eb57e1eab7f03e125ea86814743a2e11cd91c"} Jan 31 14:44:13 crc kubenswrapper[4751]: I0131 14:44:13.070470 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" podStartSLOduration=132.070453942 podStartE2EDuration="2m12.070453942s" podCreationTimestamp="2026-01-31 14:42:01 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:13.069353943 +0000 UTC m=+157.444066838" watchObservedRunningTime="2026-01-31 14:44:13.070453942 +0000 UTC m=+157.445166847" Jan 31 14:44:13 crc kubenswrapper[4751]: I0131 14:44:13.540812 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:44:13 crc kubenswrapper[4751]: I0131 14:44:13.545394 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-5f7jc" Jan 31 14:44:13 crc kubenswrapper[4751]: I0131 14:44:13.868240 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5hn9b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason 
withheld Jan 31 14:44:13 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Jan 31 14:44:13 crc kubenswrapper[4751]: [+]process-running ok Jan 31 14:44:13 crc kubenswrapper[4751]: healthz check failed Jan 31 14:44:13 crc kubenswrapper[4751]: I0131 14:44:13.868748 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5hn9b" podUID="01ff1674-4e01-4cdc-aea3-1e91a6a389e3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:44:14 crc kubenswrapper[4751]: I0131 14:44:14.047890 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-skzbg" Jan 31 14:44:14 crc kubenswrapper[4751]: I0131 14:44:14.866139 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5hn9b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:44:14 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Jan 31 14:44:14 crc kubenswrapper[4751]: [+]process-running ok Jan 31 14:44:14 crc kubenswrapper[4751]: healthz check failed Jan 31 14:44:14 crc kubenswrapper[4751]: I0131 14:44:14.866181 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5hn9b" podUID="01ff1674-4e01-4cdc-aea3-1e91a6a389e3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:44:15 crc kubenswrapper[4751]: I0131 14:44:15.866414 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5hn9b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:44:15 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Jan 31 14:44:15 crc kubenswrapper[4751]: [+]process-running ok Jan 31 
14:44:15 crc kubenswrapper[4751]: healthz check failed Jan 31 14:44:15 crc kubenswrapper[4751]: I0131 14:44:15.866750 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5hn9b" podUID="01ff1674-4e01-4cdc-aea3-1e91a6a389e3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:44:16 crc kubenswrapper[4751]: I0131 14:44:16.866984 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5hn9b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:44:16 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Jan 31 14:44:16 crc kubenswrapper[4751]: [+]process-running ok Jan 31 14:44:16 crc kubenswrapper[4751]: healthz check failed Jan 31 14:44:16 crc kubenswrapper[4751]: I0131 14:44:16.867039 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5hn9b" podUID="01ff1674-4e01-4cdc-aea3-1e91a6a389e3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:44:17 crc kubenswrapper[4751]: I0131 14:44:17.866437 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5hn9b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:44:17 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Jan 31 14:44:17 crc kubenswrapper[4751]: [+]process-running ok Jan 31 14:44:17 crc kubenswrapper[4751]: healthz check failed Jan 31 14:44:17 crc kubenswrapper[4751]: I0131 14:44:17.866755 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5hn9b" podUID="01ff1674-4e01-4cdc-aea3-1e91a6a389e3" containerName="router" probeResult="failure" output="HTTP probe 
failed with statuscode: 500" Jan 31 14:44:18 crc kubenswrapper[4751]: I0131 14:44:18.166542 4751 patch_prober.go:28] interesting pod/downloads-7954f5f757-4m7jl container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 31 14:44:18 crc kubenswrapper[4751]: I0131 14:44:18.166597 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-4m7jl" podUID="d723501b-bb29-4d60-ad97-239eb749771f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 31 14:44:18 crc kubenswrapper[4751]: I0131 14:44:18.166606 4751 patch_prober.go:28] interesting pod/downloads-7954f5f757-4m7jl container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" start-of-body= Jan 31 14:44:18 crc kubenswrapper[4751]: I0131 14:44:18.166673 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-4m7jl" podUID="d723501b-bb29-4d60-ad97-239eb749771f" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.25:8080/\": dial tcp 10.217.0.25:8080: connect: connection refused" Jan 31 14:44:18 crc kubenswrapper[4751]: I0131 14:44:18.865907 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5hn9b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:44:18 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Jan 31 14:44:18 crc kubenswrapper[4751]: [+]process-running ok Jan 31 14:44:18 crc kubenswrapper[4751]: healthz check failed Jan 31 14:44:18 crc kubenswrapper[4751]: I0131 
14:44:18.865974 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5hn9b" podUID="01ff1674-4e01-4cdc-aea3-1e91a6a389e3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:44:18 crc kubenswrapper[4751]: I0131 14:44:18.929030 4751 patch_prober.go:28] interesting pod/console-f9d7485db-h262z container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" start-of-body= Jan 31 14:44:18 crc kubenswrapper[4751]: I0131 14:44:18.929097 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-h262z" podUID="5caeb3dc-2a42-41b5-ac91-c1c8a216fb43" containerName="console" probeResult="failure" output="Get \"https://10.217.0.17:8443/health\": dial tcp 10.217.0.17:8443: connect: connection refused" Jan 31 14:44:19 crc kubenswrapper[4751]: I0131 14:44:19.866838 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5hn9b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:44:19 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Jan 31 14:44:19 crc kubenswrapper[4751]: [+]process-running ok Jan 31 14:44:19 crc kubenswrapper[4751]: healthz check failed Jan 31 14:44:19 crc kubenswrapper[4751]: I0131 14:44:19.866894 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5hn9b" podUID="01ff1674-4e01-4cdc-aea3-1e91a6a389e3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:44:20 crc kubenswrapper[4751]: I0131 14:44:20.868039 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5hn9b container/router namespace/openshift-ingress: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:44:20 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Jan 31 14:44:20 crc kubenswrapper[4751]: [+]process-running ok Jan 31 14:44:20 crc kubenswrapper[4751]: healthz check failed Jan 31 14:44:20 crc kubenswrapper[4751]: I0131 14:44:20.868122 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5hn9b" podUID="01ff1674-4e01-4cdc-aea3-1e91a6a389e3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:44:21 crc kubenswrapper[4751]: I0131 14:44:21.352415 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 14:44:21 crc kubenswrapper[4751]: I0131 14:44:21.437440 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7322d0f6-a94f-48be-98fb-b2883f20cc53-kubelet-dir\") pod \"7322d0f6-a94f-48be-98fb-b2883f20cc53\" (UID: \"7322d0f6-a94f-48be-98fb-b2883f20cc53\") " Jan 31 14:44:21 crc kubenswrapper[4751]: I0131 14:44:21.437562 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7322d0f6-a94f-48be-98fb-b2883f20cc53-kube-api-access\") pod \"7322d0f6-a94f-48be-98fb-b2883f20cc53\" (UID: \"7322d0f6-a94f-48be-98fb-b2883f20cc53\") " Jan 31 14:44:21 crc kubenswrapper[4751]: I0131 14:44:21.437577 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7322d0f6-a94f-48be-98fb-b2883f20cc53-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7322d0f6-a94f-48be-98fb-b2883f20cc53" (UID: "7322d0f6-a94f-48be-98fb-b2883f20cc53"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:44:21 crc kubenswrapper[4751]: I0131 14:44:21.437795 4751 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7322d0f6-a94f-48be-98fb-b2883f20cc53-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 14:44:21 crc kubenswrapper[4751]: I0131 14:44:21.442671 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7322d0f6-a94f-48be-98fb-b2883f20cc53-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7322d0f6-a94f-48be-98fb-b2883f20cc53" (UID: "7322d0f6-a94f-48be-98fb-b2883f20cc53"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:44:21 crc kubenswrapper[4751]: I0131 14:44:21.539130 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7322d0f6-a94f-48be-98fb-b2883f20cc53-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 14:44:21 crc kubenswrapper[4751]: I0131 14:44:21.866886 4751 patch_prober.go:28] interesting pod/router-default-5444994796-5hn9b container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 31 14:44:21 crc kubenswrapper[4751]: [-]has-synced failed: reason withheld Jan 31 14:44:21 crc kubenswrapper[4751]: [+]process-running ok Jan 31 14:44:21 crc kubenswrapper[4751]: healthz check failed Jan 31 14:44:21 crc kubenswrapper[4751]: I0131 14:44:21.866963 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-5hn9b" podUID="01ff1674-4e01-4cdc-aea3-1e91a6a389e3" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 31 14:44:22 crc kubenswrapper[4751]: I0131 14:44:22.122046 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"7322d0f6-a94f-48be-98fb-b2883f20cc53","Type":"ContainerDied","Data":"a3df1f6a5863eb19b6d181d426ef7f986d5e8f0fcb559160484636c7ea634096"} Jan 31 14:44:22 crc kubenswrapper[4751]: I0131 14:44:22.122105 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3df1f6a5863eb19b6d181d426ef7f986d5e8f0fcb559160484636c7ea634096" Jan 31 14:44:22 crc kubenswrapper[4751]: I0131 14:44:22.122108 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 31 14:44:22 crc kubenswrapper[4751]: I0131 14:44:22.867136 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-5hn9b" Jan 31 14:44:22 crc kubenswrapper[4751]: I0131 14:44:22.869840 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-5hn9b" Jan 31 14:44:23 crc kubenswrapper[4751]: I0131 14:44:23.969898 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68aeb9c7-d3c3-4c34-96ab-bb947421c504-metrics-certs\") pod \"network-metrics-daemon-xtn6l\" (UID: \"68aeb9c7-d3c3-4c34-96ab-bb947421c504\") " pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:44:23 crc kubenswrapper[4751]: I0131 14:44:23.976868 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/68aeb9c7-d3c3-4c34-96ab-bb947421c504-metrics-certs\") pod \"network-metrics-daemon-xtn6l\" (UID: \"68aeb9c7-d3c3-4c34-96ab-bb947421c504\") " pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:44:24 crc kubenswrapper[4751]: I0131 14:44:24.010685 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-sxjf5"] Jan 31 14:44:24 crc 
kubenswrapper[4751]: I0131 14:44:24.011006 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-sxjf5" podUID="c1e92f9b-2291-4bd5-80b8-c2f9e667acf8" containerName="controller-manager" containerID="cri-o://96a0531e47323a9257c24b651a7067cc71a6c2a1c9189022bfa8c72e23c446c1" gracePeriod=30 Jan 31 14:44:24 crc kubenswrapper[4751]: I0131 14:44:24.024458 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-xtn6l" Jan 31 14:44:24 crc kubenswrapper[4751]: I0131 14:44:24.025895 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7762w"] Jan 31 14:44:24 crc kubenswrapper[4751]: I0131 14:44:24.026419 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7762w" podUID="84e2930a-5ae3-4171-a3dd-e5eea62ef157" containerName="route-controller-manager" containerID="cri-o://8e402889398f0b5d93bacd46f42378e3cdc7f2ee478995578d04804d8ec0f029" gracePeriod=30 Jan 31 14:44:25 crc kubenswrapper[4751]: I0131 14:44:25.153119 4751 generic.go:334] "Generic (PLEG): container finished" podID="84e2930a-5ae3-4171-a3dd-e5eea62ef157" containerID="8e402889398f0b5d93bacd46f42378e3cdc7f2ee478995578d04804d8ec0f029" exitCode=0 Jan 31 14:44:25 crc kubenswrapper[4751]: I0131 14:44:25.153211 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7762w" event={"ID":"84e2930a-5ae3-4171-a3dd-e5eea62ef157","Type":"ContainerDied","Data":"8e402889398f0b5d93bacd46f42378e3cdc7f2ee478995578d04804d8ec0f029"} Jan 31 14:44:25 crc kubenswrapper[4751]: I0131 14:44:25.154560 4751 generic.go:334] "Generic (PLEG): container finished" podID="c1e92f9b-2291-4bd5-80b8-c2f9e667acf8" 
containerID="96a0531e47323a9257c24b651a7067cc71a6c2a1c9189022bfa8c72e23c446c1" exitCode=0 Jan 31 14:44:25 crc kubenswrapper[4751]: I0131 14:44:25.154587 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-sxjf5" event={"ID":"c1e92f9b-2291-4bd5-80b8-c2f9e667acf8","Type":"ContainerDied","Data":"96a0531e47323a9257c24b651a7067cc71a6c2a1c9189022bfa8c72e23c446c1"} Jan 31 14:44:28 crc kubenswrapper[4751]: I0131 14:44:28.172329 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-4m7jl" Jan 31 14:44:28 crc kubenswrapper[4751]: I0131 14:44:28.937922 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-h262z" Jan 31 14:44:28 crc kubenswrapper[4751]: I0131 14:44:28.946740 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-h262z" Jan 31 14:44:29 crc kubenswrapper[4751]: I0131 14:44:29.014485 4751 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-sxjf5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 14:44:29 crc kubenswrapper[4751]: I0131 14:44:29.014573 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-sxjf5" podUID="c1e92f9b-2291-4bd5-80b8-c2f9e667acf8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 14:44:29 crc kubenswrapper[4751]: I0131 14:44:29.839328 4751 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-7762w 
container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 14:44:29 crc kubenswrapper[4751]: I0131 14:44:29.839409 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7762w" podUID="84e2930a-5ae3-4171-a3dd-e5eea62ef157" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 14:44:30 crc kubenswrapper[4751]: I0131 14:44:30.285805 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:44:36 crc kubenswrapper[4751]: E0131 14:44:36.600269 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 31 14:44:36 crc kubenswrapper[4751]: E0131 14:44:36.601327 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xd7bp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-ln2lx_openshift-marketplace(d5c0f5c8-cecf-451f-abef-bf357716eb71): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 14:44:36 crc kubenswrapper[4751]: E0131 14:44:36.602542 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-ln2lx" podUID="d5c0f5c8-cecf-451f-abef-bf357716eb71" Jan 31 14:44:38 crc 
kubenswrapper[4751]: I0131 14:44:38.896786 4751 patch_prober.go:28] interesting pod/machine-config-daemon-2wpj7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 14:44:38 crc kubenswrapper[4751]: I0131 14:44:38.897268 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 14:44:39 crc kubenswrapper[4751]: I0131 14:44:39.014751 4751 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-sxjf5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 14:44:39 crc kubenswrapper[4751]: I0131 14:44:39.015344 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-sxjf5" podUID="c1e92f9b-2291-4bd5-80b8-c2f9e667acf8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 14:44:39 crc kubenswrapper[4751]: I0131 14:44:39.279454 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-kvfvk" Jan 31 14:44:39 crc kubenswrapper[4751]: I0131 14:44:39.839818 4751 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-7762w container/route-controller-manager 
namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": context deadline exceeded" start-of-body= Jan 31 14:44:39 crc kubenswrapper[4751]: I0131 14:44:39.839912 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7762w" podUID="84e2930a-5ae3-4171-a3dd-e5eea62ef157" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": context deadline exceeded" Jan 31 14:44:45 crc kubenswrapper[4751]: E0131 14:44:45.164920 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-ln2lx" podUID="d5c0f5c8-cecf-451f-abef-bf357716eb71" Jan 31 14:44:45 crc kubenswrapper[4751]: I0131 14:44:45.607456 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 14:44:45 crc kubenswrapper[4751]: E0131 14:44:45.607840 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44a681ea-f7f5-4eba-b40e-03ea17fd4bf8" containerName="pruner" Jan 31 14:44:45 crc kubenswrapper[4751]: I0131 14:44:45.607867 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="44a681ea-f7f5-4eba-b40e-03ea17fd4bf8" containerName="pruner" Jan 31 14:44:45 crc kubenswrapper[4751]: E0131 14:44:45.607895 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7322d0f6-a94f-48be-98fb-b2883f20cc53" containerName="pruner" Jan 31 14:44:45 crc kubenswrapper[4751]: I0131 14:44:45.607911 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="7322d0f6-a94f-48be-98fb-b2883f20cc53" containerName="pruner" Jan 31 14:44:45 crc kubenswrapper[4751]: I0131 14:44:45.608185 4751 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="44a681ea-f7f5-4eba-b40e-03ea17fd4bf8" containerName="pruner" Jan 31 14:44:45 crc kubenswrapper[4751]: I0131 14:44:45.608215 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="7322d0f6-a94f-48be-98fb-b2883f20cc53" containerName="pruner" Jan 31 14:44:45 crc kubenswrapper[4751]: I0131 14:44:45.609010 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 14:44:45 crc kubenswrapper[4751]: I0131 14:44:45.618414 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 31 14:44:45 crc kubenswrapper[4751]: I0131 14:44:45.618498 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 31 14:44:45 crc kubenswrapper[4751]: I0131 14:44:45.622160 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 14:44:45 crc kubenswrapper[4751]: I0131 14:44:45.657916 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 31 14:44:45 crc kubenswrapper[4751]: I0131 14:44:45.729189 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ee27ad5-3acb-4388-a964-3b526b79e776-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4ee27ad5-3acb-4388-a964-3b526b79e776\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 14:44:45 crc kubenswrapper[4751]: I0131 14:44:45.729427 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ee27ad5-3acb-4388-a964-3b526b79e776-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4ee27ad5-3acb-4388-a964-3b526b79e776\") " 
pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 14:44:45 crc kubenswrapper[4751]: I0131 14:44:45.830154 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ee27ad5-3acb-4388-a964-3b526b79e776-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4ee27ad5-3acb-4388-a964-3b526b79e776\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 14:44:45 crc kubenswrapper[4751]: I0131 14:44:45.830292 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ee27ad5-3acb-4388-a964-3b526b79e776-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4ee27ad5-3acb-4388-a964-3b526b79e776\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 14:44:45 crc kubenswrapper[4751]: I0131 14:44:45.830400 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ee27ad5-3acb-4388-a964-3b526b79e776-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"4ee27ad5-3acb-4388-a964-3b526b79e776\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 14:44:45 crc kubenswrapper[4751]: I0131 14:44:45.852872 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ee27ad5-3acb-4388-a964-3b526b79e776-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"4ee27ad5-3acb-4388-a964-3b526b79e776\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 14:44:45 crc kubenswrapper[4751]: I0131 14:44:45.983136 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 14:44:49 crc kubenswrapper[4751]: I0131 14:44:49.015016 4751 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-sxjf5 container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 14:44:49 crc kubenswrapper[4751]: I0131 14:44:49.015090 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-sxjf5" podUID="c1e92f9b-2291-4bd5-80b8-c2f9e667acf8" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.5:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 14:44:49 crc kubenswrapper[4751]: E0131 14:44:49.044582 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 31 14:44:49 crc kubenswrapper[4751]: E0131 14:44:49.044869 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n2g96,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-2lq4t_openshift-marketplace(c447796d-48ac-4eeb-8fe6-ad411966b3d3): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 14:44:49 crc kubenswrapper[4751]: E0131 14:44:49.046246 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-2lq4t" podUID="c447796d-48ac-4eeb-8fe6-ad411966b3d3" Jan 31 14:44:49 crc 
kubenswrapper[4751]: I0131 14:44:49.839584 4751 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-7762w container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 31 14:44:49 crc kubenswrapper[4751]: I0131 14:44:49.839730 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7762w" podUID="84e2930a-5ae3-4171-a3dd-e5eea62ef157" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.10:8443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.004255 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.005627 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.016023 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.106772 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ccfa0c88-7f51-4d85-8a49-e05865c6a06e-var-lock\") pod \"installer-9-crc\" (UID: \"ccfa0c88-7f51-4d85-8a49-e05865c6a06e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.107130 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ccfa0c88-7f51-4d85-8a49-e05865c6a06e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ccfa0c88-7f51-4d85-8a49-e05865c6a06e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.107192 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ccfa0c88-7f51-4d85-8a49-e05865c6a06e-kube-api-access\") pod \"installer-9-crc\" (UID: \"ccfa0c88-7f51-4d85-8a49-e05865c6a06e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.208308 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ccfa0c88-7f51-4d85-8a49-e05865c6a06e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ccfa0c88-7f51-4d85-8a49-e05865c6a06e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.208393 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/ccfa0c88-7f51-4d85-8a49-e05865c6a06e-kube-api-access\") pod \"installer-9-crc\" (UID: \"ccfa0c88-7f51-4d85-8a49-e05865c6a06e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.208442 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ccfa0c88-7f51-4d85-8a49-e05865c6a06e-var-lock\") pod \"installer-9-crc\" (UID: \"ccfa0c88-7f51-4d85-8a49-e05865c6a06e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.208552 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ccfa0c88-7f51-4d85-8a49-e05865c6a06e-var-lock\") pod \"installer-9-crc\" (UID: \"ccfa0c88-7f51-4d85-8a49-e05865c6a06e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.208612 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ccfa0c88-7f51-4d85-8a49-e05865c6a06e-kubelet-dir\") pod \"installer-9-crc\" (UID: \"ccfa0c88-7f51-4d85-8a49-e05865c6a06e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.240728 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ccfa0c88-7f51-4d85-8a49-e05865c6a06e-kube-api-access\") pod \"installer-9-crc\" (UID: \"ccfa0c88-7f51-4d85-8a49-e05865c6a06e\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 31 14:44:50 crc kubenswrapper[4751]: E0131 14:44:50.291119 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" 
pod="openshift-marketplace/certified-operators-2lq4t" podUID="c447796d-48ac-4eeb-8fe6-ad411966b3d3" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.317150 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7762w" event={"ID":"84e2930a-5ae3-4171-a3dd-e5eea62ef157","Type":"ContainerDied","Data":"15e734ffd4fba2493be6a9b1bfbac50c0f6bd9a8e2ffdca45f856621c3703f44"} Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.317206 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15e734ffd4fba2493be6a9b1bfbac50c0f6bd9a8e2ffdca45f856621c3703f44" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.320999 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-sxjf5" event={"ID":"c1e92f9b-2291-4bd5-80b8-c2f9e667acf8","Type":"ContainerDied","Data":"f58e74380a8c1e3f0d559b0c6a44b9911f247b06dc418233b9c41d9a25e6f05e"} Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.321039 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f58e74380a8c1e3f0d559b0c6a44b9911f247b06dc418233b9c41d9a25e6f05e" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.335981 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.349434 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7762w" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.355215 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-sxjf5" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.402834 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb78c7854-7b78f"] Jan 31 14:44:50 crc kubenswrapper[4751]: E0131 14:44:50.403186 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c1e92f9b-2291-4bd5-80b8-c2f9e667acf8" containerName="controller-manager" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.403207 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="c1e92f9b-2291-4bd5-80b8-c2f9e667acf8" containerName="controller-manager" Jan 31 14:44:50 crc kubenswrapper[4751]: E0131 14:44:50.403218 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="84e2930a-5ae3-4171-a3dd-e5eea62ef157" containerName="route-controller-manager" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.403228 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="84e2930a-5ae3-4171-a3dd-e5eea62ef157" containerName="route-controller-manager" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.403366 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="84e2930a-5ae3-4171-a3dd-e5eea62ef157" containerName="route-controller-manager" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.403384 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="c1e92f9b-2291-4bd5-80b8-c2f9e667acf8" containerName="controller-manager" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.406178 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fb78c7854-7b78f" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.411850 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1e92f9b-2291-4bd5-80b8-c2f9e667acf8-config\") pod \"c1e92f9b-2291-4bd5-80b8-c2f9e667acf8\" (UID: \"c1e92f9b-2291-4bd5-80b8-c2f9e667acf8\") " Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.411903 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c1e92f9b-2291-4bd5-80b8-c2f9e667acf8-proxy-ca-bundles\") pod \"c1e92f9b-2291-4bd5-80b8-c2f9e667acf8\" (UID: \"c1e92f9b-2291-4bd5-80b8-c2f9e667acf8\") " Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.411941 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fbprf\" (UniqueName: \"kubernetes.io/projected/c1e92f9b-2291-4bd5-80b8-c2f9e667acf8-kube-api-access-fbprf\") pod \"c1e92f9b-2291-4bd5-80b8-c2f9e667acf8\" (UID: \"c1e92f9b-2291-4bd5-80b8-c2f9e667acf8\") " Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.411966 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84e2930a-5ae3-4171-a3dd-e5eea62ef157-serving-cert\") pod \"84e2930a-5ae3-4171-a3dd-e5eea62ef157\" (UID: \"84e2930a-5ae3-4171-a3dd-e5eea62ef157\") " Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.412004 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/84e2930a-5ae3-4171-a3dd-e5eea62ef157-config\") pod \"84e2930a-5ae3-4171-a3dd-e5eea62ef157\" (UID: \"84e2930a-5ae3-4171-a3dd-e5eea62ef157\") " Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.412040 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84e2930a-5ae3-4171-a3dd-e5eea62ef157-client-ca\") pod \"84e2930a-5ae3-4171-a3dd-e5eea62ef157\" (UID: \"84e2930a-5ae3-4171-a3dd-e5eea62ef157\") " Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.412084 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ksqdw\" (UniqueName: \"kubernetes.io/projected/84e2930a-5ae3-4171-a3dd-e5eea62ef157-kube-api-access-ksqdw\") pod \"84e2930a-5ae3-4171-a3dd-e5eea62ef157\" (UID: \"84e2930a-5ae3-4171-a3dd-e5eea62ef157\") " Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.412121 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1e92f9b-2291-4bd5-80b8-c2f9e667acf8-serving-cert\") pod \"c1e92f9b-2291-4bd5-80b8-c2f9e667acf8\" (UID: \"c1e92f9b-2291-4bd5-80b8-c2f9e667acf8\") " Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.412162 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1e92f9b-2291-4bd5-80b8-c2f9e667acf8-client-ca\") pod \"c1e92f9b-2291-4bd5-80b8-c2f9e667acf8\" (UID: \"c1e92f9b-2291-4bd5-80b8-c2f9e667acf8\") " Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.412261 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78cdf25f-daec-4cd7-8954-1fef6f3727db-client-ca\") pod \"route-controller-manager-6fb78c7854-7b78f\" (UID: \"78cdf25f-daec-4cd7-8954-1fef6f3727db\") " pod="openshift-route-controller-manager/route-controller-manager-6fb78c7854-7b78f" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.412332 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78cdf25f-daec-4cd7-8954-1fef6f3727db-serving-cert\") pod 
\"route-controller-manager-6fb78c7854-7b78f\" (UID: \"78cdf25f-daec-4cd7-8954-1fef6f3727db\") " pod="openshift-route-controller-manager/route-controller-manager-6fb78c7854-7b78f" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.412369 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78cdf25f-daec-4cd7-8954-1fef6f3727db-config\") pod \"route-controller-manager-6fb78c7854-7b78f\" (UID: \"78cdf25f-daec-4cd7-8954-1fef6f3727db\") " pod="openshift-route-controller-manager/route-controller-manager-6fb78c7854-7b78f" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.412400 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6tv6b\" (UniqueName: \"kubernetes.io/projected/78cdf25f-daec-4cd7-8954-1fef6f3727db-kube-api-access-6tv6b\") pod \"route-controller-manager-6fb78c7854-7b78f\" (UID: \"78cdf25f-daec-4cd7-8954-1fef6f3727db\") " pod="openshift-route-controller-manager/route-controller-manager-6fb78c7854-7b78f" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.413319 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1e92f9b-2291-4bd5-80b8-c2f9e667acf8-config" (OuterVolumeSpecName: "config") pod "c1e92f9b-2291-4bd5-80b8-c2f9e667acf8" (UID: "c1e92f9b-2291-4bd5-80b8-c2f9e667acf8"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.413895 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1e92f9b-2291-4bd5-80b8-c2f9e667acf8-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "c1e92f9b-2291-4bd5-80b8-c2f9e667acf8" (UID: "c1e92f9b-2291-4bd5-80b8-c2f9e667acf8"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.415009 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c1e92f9b-2291-4bd5-80b8-c2f9e667acf8-client-ca" (OuterVolumeSpecName: "client-ca") pod "c1e92f9b-2291-4bd5-80b8-c2f9e667acf8" (UID: "c1e92f9b-2291-4bd5-80b8-c2f9e667acf8"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.415508 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84e2930a-5ae3-4171-a3dd-e5eea62ef157-client-ca" (OuterVolumeSpecName: "client-ca") pod "84e2930a-5ae3-4171-a3dd-e5eea62ef157" (UID: "84e2930a-5ae3-4171-a3dd-e5eea62ef157"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.415717 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/84e2930a-5ae3-4171-a3dd-e5eea62ef157-config" (OuterVolumeSpecName: "config") pod "84e2930a-5ae3-4171-a3dd-e5eea62ef157" (UID: "84e2930a-5ae3-4171-a3dd-e5eea62ef157"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.427642 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c1e92f9b-2291-4bd5-80b8-c2f9e667acf8-kube-api-access-fbprf" (OuterVolumeSpecName: "kube-api-access-fbprf") pod "c1e92f9b-2291-4bd5-80b8-c2f9e667acf8" (UID: "c1e92f9b-2291-4bd5-80b8-c2f9e667acf8"). InnerVolumeSpecName "kube-api-access-fbprf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.428226 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/84e2930a-5ae3-4171-a3dd-e5eea62ef157-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "84e2930a-5ae3-4171-a3dd-e5eea62ef157" (UID: "84e2930a-5ae3-4171-a3dd-e5eea62ef157"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.428395 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c1e92f9b-2291-4bd5-80b8-c2f9e667acf8-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c1e92f9b-2291-4bd5-80b8-c2f9e667acf8" (UID: "c1e92f9b-2291-4bd5-80b8-c2f9e667acf8"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.428568 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/84e2930a-5ae3-4171-a3dd-e5eea62ef157-kube-api-access-ksqdw" (OuterVolumeSpecName: "kube-api-access-ksqdw") pod "84e2930a-5ae3-4171-a3dd-e5eea62ef157" (UID: "84e2930a-5ae3-4171-a3dd-e5eea62ef157"). InnerVolumeSpecName "kube-api-access-ksqdw". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.469160 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb78c7854-7b78f"] Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.513449 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78cdf25f-daec-4cd7-8954-1fef6f3727db-serving-cert\") pod \"route-controller-manager-6fb78c7854-7b78f\" (UID: \"78cdf25f-daec-4cd7-8954-1fef6f3727db\") " pod="openshift-route-controller-manager/route-controller-manager-6fb78c7854-7b78f" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.513514 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78cdf25f-daec-4cd7-8954-1fef6f3727db-config\") pod \"route-controller-manager-6fb78c7854-7b78f\" (UID: \"78cdf25f-daec-4cd7-8954-1fef6f3727db\") " pod="openshift-route-controller-manager/route-controller-manager-6fb78c7854-7b78f" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.513551 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6tv6b\" (UniqueName: \"kubernetes.io/projected/78cdf25f-daec-4cd7-8954-1fef6f3727db-kube-api-access-6tv6b\") pod \"route-controller-manager-6fb78c7854-7b78f\" (UID: \"78cdf25f-daec-4cd7-8954-1fef6f3727db\") " pod="openshift-route-controller-manager/route-controller-manager-6fb78c7854-7b78f" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.513597 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78cdf25f-daec-4cd7-8954-1fef6f3727db-client-ca\") pod \"route-controller-manager-6fb78c7854-7b78f\" (UID: \"78cdf25f-daec-4cd7-8954-1fef6f3727db\") " pod="openshift-route-controller-manager/route-controller-manager-6fb78c7854-7b78f" 
Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.513656 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ksqdw\" (UniqueName: \"kubernetes.io/projected/84e2930a-5ae3-4171-a3dd-e5eea62ef157-kube-api-access-ksqdw\") on node \"crc\" DevicePath \"\"" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.513682 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c1e92f9b-2291-4bd5-80b8-c2f9e667acf8-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.513701 4751 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/c1e92f9b-2291-4bd5-80b8-c2f9e667acf8-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.513716 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c1e92f9b-2291-4bd5-80b8-c2f9e667acf8-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.513731 4751 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c1e92f9b-2291-4bd5-80b8-c2f9e667acf8-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.513749 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fbprf\" (UniqueName: \"kubernetes.io/projected/c1e92f9b-2291-4bd5-80b8-c2f9e667acf8-kube-api-access-fbprf\") on node \"crc\" DevicePath \"\"" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.513761 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/84e2930a-5ae3-4171-a3dd-e5eea62ef157-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.513772 4751 reconciler_common.go:293] "Volume detached for volume 
\"config\" (UniqueName: \"kubernetes.io/configmap/84e2930a-5ae3-4171-a3dd-e5eea62ef157-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.513782 4751 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/84e2930a-5ae3-4171-a3dd-e5eea62ef157-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.514829 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78cdf25f-daec-4cd7-8954-1fef6f3727db-client-ca\") pod \"route-controller-manager-6fb78c7854-7b78f\" (UID: \"78cdf25f-daec-4cd7-8954-1fef6f3727db\") " pod="openshift-route-controller-manager/route-controller-manager-6fb78c7854-7b78f" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.516945 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78cdf25f-daec-4cd7-8954-1fef6f3727db-config\") pod \"route-controller-manager-6fb78c7854-7b78f\" (UID: \"78cdf25f-daec-4cd7-8954-1fef6f3727db\") " pod="openshift-route-controller-manager/route-controller-manager-6fb78c7854-7b78f" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.517412 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78cdf25f-daec-4cd7-8954-1fef6f3727db-serving-cert\") pod \"route-controller-manager-6fb78c7854-7b78f\" (UID: \"78cdf25f-daec-4cd7-8954-1fef6f3727db\") " pod="openshift-route-controller-manager/route-controller-manager-6fb78c7854-7b78f" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.534156 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6tv6b\" (UniqueName: \"kubernetes.io/projected/78cdf25f-daec-4cd7-8954-1fef6f3727db-kube-api-access-6tv6b\") pod \"route-controller-manager-6fb78c7854-7b78f\" (UID: 
\"78cdf25f-daec-4cd7-8954-1fef6f3727db\") " pod="openshift-route-controller-manager/route-controller-manager-6fb78c7854-7b78f" Jan 31 14:44:50 crc kubenswrapper[4751]: I0131 14:44:50.771726 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fb78c7854-7b78f" Jan 31 14:44:51 crc kubenswrapper[4751]: I0131 14:44:51.326396 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-sxjf5" Jan 31 14:44:51 crc kubenswrapper[4751]: I0131 14:44:51.326423 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-7762w" Jan 31 14:44:51 crc kubenswrapper[4751]: I0131 14:44:51.353330 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-sxjf5"] Jan 31 14:44:51 crc kubenswrapper[4751]: I0131 14:44:51.361522 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-sxjf5"] Jan 31 14:44:51 crc kubenswrapper[4751]: I0131 14:44:51.366610 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7762w"] Jan 31 14:44:51 crc kubenswrapper[4751]: I0131 14:44:51.370174 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-7762w"] Jan 31 14:44:52 crc kubenswrapper[4751]: I0131 14:44:52.419954 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="84e2930a-5ae3-4171-a3dd-e5eea62ef157" path="/var/lib/kubelet/pods/84e2930a-5ae3-4171-a3dd-e5eea62ef157/volumes" Jan 31 14:44:52 crc kubenswrapper[4751]: I0131 14:44:52.421045 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c1e92f9b-2291-4bd5-80b8-c2f9e667acf8" 
path="/var/lib/kubelet/pods/c1e92f9b-2291-4bd5-80b8-c2f9e667acf8/volumes" Jan 31 14:44:52 crc kubenswrapper[4751]: E0131 14:44:52.522757 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 31 14:44:52 crc kubenswrapper[4751]: E0131 14:44:52.522988 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-566b8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]Contai
nerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-m4m6r_openshift-marketplace(8d5f1383-42d7-47a1-9e47-8dba038241d2): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 14:44:52 crc kubenswrapper[4751]: E0131 14:44:52.524330 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-m4m6r" podUID="8d5f1383-42d7-47a1-9e47-8dba038241d2" Jan 31 14:44:52 crc kubenswrapper[4751]: E0131 14:44:52.772490 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 31 14:44:52 crc kubenswrapper[4751]: E0131 14:44:52.772790 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wkxqv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-k2xfl_openshift-marketplace(e656c7af-fbd9-4e9c-ae61-d4142d37c89f): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 14:44:52 crc kubenswrapper[4751]: E0131 14:44:52.774453 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-k2xfl" podUID="e656c7af-fbd9-4e9c-ae61-d4142d37c89f" Jan 31 14:44:53 crc 
kubenswrapper[4751]: I0131 14:44:53.110358 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-9ff5cfb77-zl2hc"] Jan 31 14:44:53 crc kubenswrapper[4751]: I0131 14:44:53.111896 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9ff5cfb77-zl2hc" Jan 31 14:44:53 crc kubenswrapper[4751]: I0131 14:44:53.116150 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 14:44:53 crc kubenswrapper[4751]: I0131 14:44:53.116728 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 14:44:53 crc kubenswrapper[4751]: I0131 14:44:53.116775 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 14:44:53 crc kubenswrapper[4751]: I0131 14:44:53.116878 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 14:44:53 crc kubenswrapper[4751]: I0131 14:44:53.117050 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 14:44:53 crc kubenswrapper[4751]: I0131 14:44:53.123552 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 14:44:53 crc kubenswrapper[4751]: I0131 14:44:53.124804 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 14:44:53 crc kubenswrapper[4751]: I0131 14:44:53.133027 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9ff5cfb77-zl2hc"] Jan 31 14:44:53 crc kubenswrapper[4751]: I0131 14:44:53.251843 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7689427f-2c92-4b56-9617-1139504142ee-config\") pod \"controller-manager-9ff5cfb77-zl2hc\" (UID: \"7689427f-2c92-4b56-9617-1139504142ee\") " pod="openshift-controller-manager/controller-manager-9ff5cfb77-zl2hc" Jan 31 14:44:53 crc kubenswrapper[4751]: I0131 14:44:53.251924 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7689427f-2c92-4b56-9617-1139504142ee-serving-cert\") pod \"controller-manager-9ff5cfb77-zl2hc\" (UID: \"7689427f-2c92-4b56-9617-1139504142ee\") " pod="openshift-controller-manager/controller-manager-9ff5cfb77-zl2hc" Jan 31 14:44:53 crc kubenswrapper[4751]: I0131 14:44:53.252059 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-krs4f\" (UniqueName: \"kubernetes.io/projected/7689427f-2c92-4b56-9617-1139504142ee-kube-api-access-krs4f\") pod \"controller-manager-9ff5cfb77-zl2hc\" (UID: \"7689427f-2c92-4b56-9617-1139504142ee\") " pod="openshift-controller-manager/controller-manager-9ff5cfb77-zl2hc" Jan 31 14:44:53 crc kubenswrapper[4751]: I0131 14:44:53.252169 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7689427f-2c92-4b56-9617-1139504142ee-proxy-ca-bundles\") pod \"controller-manager-9ff5cfb77-zl2hc\" (UID: \"7689427f-2c92-4b56-9617-1139504142ee\") " pod="openshift-controller-manager/controller-manager-9ff5cfb77-zl2hc" Jan 31 14:44:53 crc kubenswrapper[4751]: I0131 14:44:53.252235 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7689427f-2c92-4b56-9617-1139504142ee-client-ca\") pod \"controller-manager-9ff5cfb77-zl2hc\" (UID: 
\"7689427f-2c92-4b56-9617-1139504142ee\") " pod="openshift-controller-manager/controller-manager-9ff5cfb77-zl2hc" Jan 31 14:44:53 crc kubenswrapper[4751]: I0131 14:44:53.353948 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-krs4f\" (UniqueName: \"kubernetes.io/projected/7689427f-2c92-4b56-9617-1139504142ee-kube-api-access-krs4f\") pod \"controller-manager-9ff5cfb77-zl2hc\" (UID: \"7689427f-2c92-4b56-9617-1139504142ee\") " pod="openshift-controller-manager/controller-manager-9ff5cfb77-zl2hc" Jan 31 14:44:53 crc kubenswrapper[4751]: I0131 14:44:53.354267 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7689427f-2c92-4b56-9617-1139504142ee-proxy-ca-bundles\") pod \"controller-manager-9ff5cfb77-zl2hc\" (UID: \"7689427f-2c92-4b56-9617-1139504142ee\") " pod="openshift-controller-manager/controller-manager-9ff5cfb77-zl2hc" Jan 31 14:44:53 crc kubenswrapper[4751]: I0131 14:44:53.354381 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7689427f-2c92-4b56-9617-1139504142ee-client-ca\") pod \"controller-manager-9ff5cfb77-zl2hc\" (UID: \"7689427f-2c92-4b56-9617-1139504142ee\") " pod="openshift-controller-manager/controller-manager-9ff5cfb77-zl2hc" Jan 31 14:44:53 crc kubenswrapper[4751]: I0131 14:44:53.355776 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7689427f-2c92-4b56-9617-1139504142ee-client-ca\") pod \"controller-manager-9ff5cfb77-zl2hc\" (UID: \"7689427f-2c92-4b56-9617-1139504142ee\") " pod="openshift-controller-manager/controller-manager-9ff5cfb77-zl2hc" Jan 31 14:44:53 crc kubenswrapper[4751]: I0131 14:44:53.356006 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/7689427f-2c92-4b56-9617-1139504142ee-proxy-ca-bundles\") pod \"controller-manager-9ff5cfb77-zl2hc\" (UID: \"7689427f-2c92-4b56-9617-1139504142ee\") " pod="openshift-controller-manager/controller-manager-9ff5cfb77-zl2hc" Jan 31 14:44:53 crc kubenswrapper[4751]: I0131 14:44:53.357905 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7689427f-2c92-4b56-9617-1139504142ee-config\") pod \"controller-manager-9ff5cfb77-zl2hc\" (UID: \"7689427f-2c92-4b56-9617-1139504142ee\") " pod="openshift-controller-manager/controller-manager-9ff5cfb77-zl2hc" Jan 31 14:44:53 crc kubenswrapper[4751]: I0131 14:44:53.357989 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7689427f-2c92-4b56-9617-1139504142ee-config\") pod \"controller-manager-9ff5cfb77-zl2hc\" (UID: \"7689427f-2c92-4b56-9617-1139504142ee\") " pod="openshift-controller-manager/controller-manager-9ff5cfb77-zl2hc" Jan 31 14:44:53 crc kubenswrapper[4751]: I0131 14:44:53.358062 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7689427f-2c92-4b56-9617-1139504142ee-serving-cert\") pod \"controller-manager-9ff5cfb77-zl2hc\" (UID: \"7689427f-2c92-4b56-9617-1139504142ee\") " pod="openshift-controller-manager/controller-manager-9ff5cfb77-zl2hc" Jan 31 14:44:53 crc kubenswrapper[4751]: I0131 14:44:53.369600 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7689427f-2c92-4b56-9617-1139504142ee-serving-cert\") pod \"controller-manager-9ff5cfb77-zl2hc\" (UID: \"7689427f-2c92-4b56-9617-1139504142ee\") " pod="openshift-controller-manager/controller-manager-9ff5cfb77-zl2hc" Jan 31 14:44:53 crc kubenswrapper[4751]: I0131 14:44:53.400528 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-krs4f\" (UniqueName: \"kubernetes.io/projected/7689427f-2c92-4b56-9617-1139504142ee-kube-api-access-krs4f\") pod \"controller-manager-9ff5cfb77-zl2hc\" (UID: \"7689427f-2c92-4b56-9617-1139504142ee\") " pod="openshift-controller-manager/controller-manager-9ff5cfb77-zl2hc" Jan 31 14:44:53 crc kubenswrapper[4751]: I0131 14:44:53.435294 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9ff5cfb77-zl2hc" Jan 31 14:44:53 crc kubenswrapper[4751]: E0131 14:44:53.584612 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-k2xfl" podUID="e656c7af-fbd9-4e9c-ae61-d4142d37c89f" Jan 31 14:44:53 crc kubenswrapper[4751]: E0131 14:44:53.587265 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-m4m6r" podUID="8d5f1383-42d7-47a1-9e47-8dba038241d2" Jan 31 14:44:53 crc kubenswrapper[4751]: E0131 14:44:53.662401 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 31 14:44:53 crc kubenswrapper[4751]: E0131 14:44:53.662647 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-qgf8m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-gktqp_openshift-marketplace(0cfb2e52-7371-4d38-994c-92b5b7d123cc): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 14:44:53 crc kubenswrapper[4751]: E0131 14:44:53.664028 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-gktqp" podUID="0cfb2e52-7371-4d38-994c-92b5b7d123cc" Jan 31 14:44:53 crc 
kubenswrapper[4751]: E0131 14:44:53.665737 4751 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 31 14:44:53 crc kubenswrapper[4751]: E0131 14:44:53.665957 4751 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4nf7x,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
redhat-marketplace-nfjx5_openshift-marketplace(e771b68a-beea-4c8b-a085-b869155ca20d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 31 14:44:53 crc kubenswrapper[4751]: E0131 14:44:53.667501 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-nfjx5" podUID="e771b68a-beea-4c8b-a085-b869155ca20d" Jan 31 14:44:53 crc kubenswrapper[4751]: I0131 14:44:53.875806 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-xtn6l"] Jan 31 14:44:53 crc kubenswrapper[4751]: W0131 14:44:53.881364 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod68aeb9c7_d3c3_4c34_96ab_bb947421c504.slice/crio-e3f878d12e7733c635007689de1ae2b125573c090c2c8d57308fa277204993ab WatchSource:0}: Error finding container e3f878d12e7733c635007689de1ae2b125573c090c2c8d57308fa277204993ab: Status 404 returned error can't find the container with id e3f878d12e7733c635007689de1ae2b125573c090c2c8d57308fa277204993ab Jan 31 14:44:53 crc kubenswrapper[4751]: I0131 14:44:53.979212 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb78c7854-7b78f"] Jan 31 14:44:53 crc kubenswrapper[4751]: W0131 14:44:53.982456 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod78cdf25f_daec_4cd7_8954_1fef6f3727db.slice/crio-970e5859fee2ce045474246c69d85b582bd206939166d1d5d70f07913b640d31 WatchSource:0}: Error finding container 970e5859fee2ce045474246c69d85b582bd206939166d1d5d70f07913b640d31: Status 404 returned error can't 
find the container with id 970e5859fee2ce045474246c69d85b582bd206939166d1d5d70f07913b640d31 Jan 31 14:44:54 crc kubenswrapper[4751]: I0131 14:44:54.130852 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-9ff5cfb77-zl2hc"] Jan 31 14:44:54 crc kubenswrapper[4751]: W0131 14:44:54.134130 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4ee27ad5_3acb_4388_a964_3b526b79e776.slice/crio-6ffb197e3834ae2da47e7ac1e9c5209aca77770c8ec1ba80483d9a0a51243fd6 WatchSource:0}: Error finding container 6ffb197e3834ae2da47e7ac1e9c5209aca77770c8ec1ba80483d9a0a51243fd6: Status 404 returned error can't find the container with id 6ffb197e3834ae2da47e7ac1e9c5209aca77770c8ec1ba80483d9a0a51243fd6 Jan 31 14:44:54 crc kubenswrapper[4751]: I0131 14:44:54.135510 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 31 14:44:54 crc kubenswrapper[4751]: I0131 14:44:54.139365 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 31 14:44:54 crc kubenswrapper[4751]: I0131 14:44:54.352701 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ccfa0c88-7f51-4d85-8a49-e05865c6a06e","Type":"ContainerStarted","Data":"ca8ebba9df4a8c9712a669a8d97759aea5c95bd694f2cead6b4521af30eb8469"} Jan 31 14:44:54 crc kubenswrapper[4751]: I0131 14:44:54.354167 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9ff5cfb77-zl2hc" event={"ID":"7689427f-2c92-4b56-9617-1139504142ee","Type":"ContainerStarted","Data":"53e8f013a379679cef6168fde6d18706b1593479aa88c48f6b833cf1be744d64"} Jan 31 14:44:54 crc kubenswrapper[4751]: I0131 14:44:54.355191 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" 
event={"ID":"4ee27ad5-3acb-4388-a964-3b526b79e776","Type":"ContainerStarted","Data":"6ffb197e3834ae2da47e7ac1e9c5209aca77770c8ec1ba80483d9a0a51243fd6"} Jan 31 14:44:54 crc kubenswrapper[4751]: I0131 14:44:54.356297 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fb78c7854-7b78f" event={"ID":"78cdf25f-daec-4cd7-8954-1fef6f3727db","Type":"ContainerStarted","Data":"970e5859fee2ce045474246c69d85b582bd206939166d1d5d70f07913b640d31"} Jan 31 14:44:54 crc kubenswrapper[4751]: I0131 14:44:54.357404 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xtn6l" event={"ID":"68aeb9c7-d3c3-4c34-96ab-bb947421c504","Type":"ContainerStarted","Data":"e3f878d12e7733c635007689de1ae2b125573c090c2c8d57308fa277204993ab"} Jan 31 14:44:54 crc kubenswrapper[4751]: E0131 14:44:54.361756 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-nfjx5" podUID="e771b68a-beea-4c8b-a085-b869155ca20d" Jan 31 14:44:54 crc kubenswrapper[4751]: E0131 14:44:54.365459 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-gktqp" podUID="0cfb2e52-7371-4d38-994c-92b5b7d123cc" Jan 31 14:44:55 crc kubenswrapper[4751]: I0131 14:44:55.363963 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7j7f" event={"ID":"f614f9ab-b5e2-4548-93e7-571d1ffb57b0","Type":"ContainerStarted","Data":"f9bdddf94b5f6d16e3861f0fec527d5909cdc3d4c12d6d71c61a9d592a18874f"} Jan 31 14:44:55 crc kubenswrapper[4751]: I0131 14:44:55.366045 4751 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fb78c7854-7b78f" event={"ID":"78cdf25f-daec-4cd7-8954-1fef6f3727db","Type":"ContainerStarted","Data":"d6fcf4545fc743b58a8d591ee8c96761c9cf12c0b3c6f1393a495a38267ab81b"} Jan 31 14:44:55 crc kubenswrapper[4751]: I0131 14:44:55.366222 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6fb78c7854-7b78f" Jan 31 14:44:55 crc kubenswrapper[4751]: I0131 14:44:55.369221 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xtn6l" event={"ID":"68aeb9c7-d3c3-4c34-96ab-bb947421c504","Type":"ContainerStarted","Data":"6bc10453d43a1e5be2ec99bfe8bab5eef283d3e7ba32bb938b4f299d4b7611e7"} Jan 31 14:44:55 crc kubenswrapper[4751]: I0131 14:44:55.372317 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ccfa0c88-7f51-4d85-8a49-e05865c6a06e","Type":"ContainerStarted","Data":"8b4d59b21d9818b51f757f56dda578d9b5e64551b0acae90d2098c728b3290ee"} Jan 31 14:44:55 crc kubenswrapper[4751]: I0131 14:44:55.373286 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6fb78c7854-7b78f" Jan 31 14:44:55 crc kubenswrapper[4751]: I0131 14:44:55.376377 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9ff5cfb77-zl2hc" event={"ID":"7689427f-2c92-4b56-9617-1139504142ee","Type":"ContainerStarted","Data":"5ab0b26976de57e2a1cac017e1118160bad1efe8234f246fb4610cad0bf8fe55"} Jan 31 14:44:55 crc kubenswrapper[4751]: I0131 14:44:55.377004 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-9ff5cfb77-zl2hc" Jan 31 14:44:55 crc kubenswrapper[4751]: I0131 14:44:55.379887 4751 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4ee27ad5-3acb-4388-a964-3b526b79e776","Type":"ContainerStarted","Data":"088b308890b05df5e2b8d2107eb926cdcdbce0d50a37541483e97b4f64b46c2c"} Jan 31 14:44:55 crc kubenswrapper[4751]: I0131 14:44:55.381496 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-9ff5cfb77-zl2hc" Jan 31 14:44:55 crc kubenswrapper[4751]: I0131 14:44:55.382795 4751 generic.go:334] "Generic (PLEG): container finished" podID="074619b7-9220-4377-b93d-6088199a5e16" containerID="a757fc9386532749c4b360530fb36362a62f17d343908433db3d64555171c0b9" exitCode=0 Jan 31 14:44:55 crc kubenswrapper[4751]: I0131 14:44:55.382829 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wcnsn" event={"ID":"074619b7-9220-4377-b93d-6088199a5e16","Type":"ContainerDied","Data":"a757fc9386532749c4b360530fb36362a62f17d343908433db3d64555171c0b9"} Jan 31 14:44:55 crc kubenswrapper[4751]: I0131 14:44:55.404693 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=6.404679202 podStartE2EDuration="6.404679202s" podCreationTimestamp="2026-01-31 14:44:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:55.404398214 +0000 UTC m=+199.779111109" watchObservedRunningTime="2026-01-31 14:44:55.404679202 +0000 UTC m=+199.779392087" Jan 31 14:44:55 crc kubenswrapper[4751]: I0131 14:44:55.426657 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-9ff5cfb77-zl2hc" podStartSLOduration=12.426636969 podStartE2EDuration="12.426636969s" podCreationTimestamp="2026-01-31 14:44:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:55.423006443 +0000 UTC m=+199.797719338" watchObservedRunningTime="2026-01-31 14:44:55.426636969 +0000 UTC m=+199.801349864" Jan 31 14:44:55 crc kubenswrapper[4751]: I0131 14:44:55.466763 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=10.466713182 podStartE2EDuration="10.466713182s" podCreationTimestamp="2026-01-31 14:44:45 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:55.461140495 +0000 UTC m=+199.835853380" watchObservedRunningTime="2026-01-31 14:44:55.466713182 +0000 UTC m=+199.841426077" Jan 31 14:44:55 crc kubenswrapper[4751]: I0131 14:44:55.484326 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6fb78c7854-7b78f" podStartSLOduration=11.484306454 podStartE2EDuration="11.484306454s" podCreationTimestamp="2026-01-31 14:44:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:55.477939947 +0000 UTC m=+199.852652832" watchObservedRunningTime="2026-01-31 14:44:55.484306454 +0000 UTC m=+199.859019339" Jan 31 14:44:56 crc kubenswrapper[4751]: I0131 14:44:56.392743 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-xtn6l" event={"ID":"68aeb9c7-d3c3-4c34-96ab-bb947421c504","Type":"ContainerStarted","Data":"60bab24f3dc5e2c7363b1ca62b341cb3c2ce6d95eb14311354c74fe4b027b247"} Jan 31 14:44:56 crc kubenswrapper[4751]: I0131 14:44:56.423104 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-xtn6l" podStartSLOduration=175.423088194 podStartE2EDuration="2m55.423088194s" podCreationTimestamp="2026-01-31 14:42:01 +0000 
UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:44:56.420748542 +0000 UTC m=+200.795461437" watchObservedRunningTime="2026-01-31 14:44:56.423088194 +0000 UTC m=+200.797801089" Jan 31 14:44:59 crc kubenswrapper[4751]: I0131 14:44:59.420980 4751 generic.go:334] "Generic (PLEG): container finished" podID="4ee27ad5-3acb-4388-a964-3b526b79e776" containerID="088b308890b05df5e2b8d2107eb926cdcdbce0d50a37541483e97b4f64b46c2c" exitCode=0 Jan 31 14:44:59 crc kubenswrapper[4751]: I0131 14:44:59.421146 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4ee27ad5-3acb-4388-a964-3b526b79e776","Type":"ContainerDied","Data":"088b308890b05df5e2b8d2107eb926cdcdbce0d50a37541483e97b4f64b46c2c"} Jan 31 14:44:59 crc kubenswrapper[4751]: I0131 14:44:59.425253 4751 generic.go:334] "Generic (PLEG): container finished" podID="f614f9ab-b5e2-4548-93e7-571d1ffb57b0" containerID="f9bdddf94b5f6d16e3861f0fec527d5909cdc3d4c12d6d71c61a9d592a18874f" exitCode=0 Jan 31 14:44:59 crc kubenswrapper[4751]: I0131 14:44:59.425325 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7j7f" event={"ID":"f614f9ab-b5e2-4548-93e7-571d1ffb57b0","Type":"ContainerDied","Data":"f9bdddf94b5f6d16e3861f0fec527d5909cdc3d4c12d6d71c61a9d592a18874f"} Jan 31 14:45:00 crc kubenswrapper[4751]: I0131 14:45:00.150703 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497845-gwlpr"] Jan 31 14:45:00 crc kubenswrapper[4751]: I0131 14:45:00.152311 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-gwlpr" Jan 31 14:45:00 crc kubenswrapper[4751]: I0131 14:45:00.157226 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 14:45:00 crc kubenswrapper[4751]: I0131 14:45:00.157261 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 14:45:00 crc kubenswrapper[4751]: I0131 14:45:00.171669 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497845-gwlpr"] Jan 31 14:45:00 crc kubenswrapper[4751]: I0131 14:45:00.266340 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a7534cc-afa8-4cd1-acb0-e4269e55316b-secret-volume\") pod \"collect-profiles-29497845-gwlpr\" (UID: \"7a7534cc-afa8-4cd1-acb0-e4269e55316b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-gwlpr" Jan 31 14:45:00 crc kubenswrapper[4751]: I0131 14:45:00.266449 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd4xf\" (UniqueName: \"kubernetes.io/projected/7a7534cc-afa8-4cd1-acb0-e4269e55316b-kube-api-access-pd4xf\") pod \"collect-profiles-29497845-gwlpr\" (UID: \"7a7534cc-afa8-4cd1-acb0-e4269e55316b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-gwlpr" Jan 31 14:45:00 crc kubenswrapper[4751]: I0131 14:45:00.266839 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a7534cc-afa8-4cd1-acb0-e4269e55316b-config-volume\") pod \"collect-profiles-29497845-gwlpr\" (UID: \"7a7534cc-afa8-4cd1-acb0-e4269e55316b\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-gwlpr" Jan 31 14:45:00 crc kubenswrapper[4751]: I0131 14:45:00.368981 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a7534cc-afa8-4cd1-acb0-e4269e55316b-secret-volume\") pod \"collect-profiles-29497845-gwlpr\" (UID: \"7a7534cc-afa8-4cd1-acb0-e4269e55316b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-gwlpr" Jan 31 14:45:00 crc kubenswrapper[4751]: I0131 14:45:00.369065 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd4xf\" (UniqueName: \"kubernetes.io/projected/7a7534cc-afa8-4cd1-acb0-e4269e55316b-kube-api-access-pd4xf\") pod \"collect-profiles-29497845-gwlpr\" (UID: \"7a7534cc-afa8-4cd1-acb0-e4269e55316b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-gwlpr" Jan 31 14:45:00 crc kubenswrapper[4751]: I0131 14:45:00.369119 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a7534cc-afa8-4cd1-acb0-e4269e55316b-config-volume\") pod \"collect-profiles-29497845-gwlpr\" (UID: \"7a7534cc-afa8-4cd1-acb0-e4269e55316b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-gwlpr" Jan 31 14:45:00 crc kubenswrapper[4751]: I0131 14:45:00.370344 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a7534cc-afa8-4cd1-acb0-e4269e55316b-config-volume\") pod \"collect-profiles-29497845-gwlpr\" (UID: \"7a7534cc-afa8-4cd1-acb0-e4269e55316b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-gwlpr" Jan 31 14:45:00 crc kubenswrapper[4751]: I0131 14:45:00.376522 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/7a7534cc-afa8-4cd1-acb0-e4269e55316b-secret-volume\") pod \"collect-profiles-29497845-gwlpr\" (UID: \"7a7534cc-afa8-4cd1-acb0-e4269e55316b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-gwlpr" Jan 31 14:45:00 crc kubenswrapper[4751]: I0131 14:45:00.394127 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd4xf\" (UniqueName: \"kubernetes.io/projected/7a7534cc-afa8-4cd1-acb0-e4269e55316b-kube-api-access-pd4xf\") pod \"collect-profiles-29497845-gwlpr\" (UID: \"7a7534cc-afa8-4cd1-acb0-e4269e55316b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-gwlpr" Jan 31 14:45:00 crc kubenswrapper[4751]: I0131 14:45:00.432054 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wcnsn" event={"ID":"074619b7-9220-4377-b93d-6088199a5e16","Type":"ContainerStarted","Data":"362555e9a4bda60e895d4cff8fad32fbdb6800b24c4a8d8deeb2ac026aebcc1b"} Jan 31 14:45:00 crc kubenswrapper[4751]: I0131 14:45:00.450200 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-wcnsn" podStartSLOduration=3.477883351 podStartE2EDuration="54.450184338s" podCreationTimestamp="2026-01-31 14:44:06 +0000 UTC" firstStartedPulling="2026-01-31 14:44:07.791011553 +0000 UTC m=+152.165724438" lastFinishedPulling="2026-01-31 14:44:58.76331253 +0000 UTC m=+203.138025425" observedRunningTime="2026-01-31 14:45:00.448575515 +0000 UTC m=+204.823288410" watchObservedRunningTime="2026-01-31 14:45:00.450184338 +0000 UTC m=+204.824897223" Jan 31 14:45:00 crc kubenswrapper[4751]: I0131 14:45:00.521379 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-gwlpr" Jan 31 14:45:00 crc kubenswrapper[4751]: I0131 14:45:00.779913 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 14:45:00 crc kubenswrapper[4751]: I0131 14:45:00.875821 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ee27ad5-3acb-4388-a964-3b526b79e776-kubelet-dir\") pod \"4ee27ad5-3acb-4388-a964-3b526b79e776\" (UID: \"4ee27ad5-3acb-4388-a964-3b526b79e776\") " Jan 31 14:45:00 crc kubenswrapper[4751]: I0131 14:45:00.875915 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ee27ad5-3acb-4388-a964-3b526b79e776-kube-api-access\") pod \"4ee27ad5-3acb-4388-a964-3b526b79e776\" (UID: \"4ee27ad5-3acb-4388-a964-3b526b79e776\") " Jan 31 14:45:00 crc kubenswrapper[4751]: I0131 14:45:00.875937 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4ee27ad5-3acb-4388-a964-3b526b79e776-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4ee27ad5-3acb-4388-a964-3b526b79e776" (UID: "4ee27ad5-3acb-4388-a964-3b526b79e776"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:45:00 crc kubenswrapper[4751]: I0131 14:45:00.876204 4751 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4ee27ad5-3acb-4388-a964-3b526b79e776-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:00 crc kubenswrapper[4751]: I0131 14:45:00.882259 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ee27ad5-3acb-4388-a964-3b526b79e776-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4ee27ad5-3acb-4388-a964-3b526b79e776" (UID: "4ee27ad5-3acb-4388-a964-3b526b79e776"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:45:00 crc kubenswrapper[4751]: I0131 14:45:00.977223 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4ee27ad5-3acb-4388-a964-3b526b79e776-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:01 crc kubenswrapper[4751]: I0131 14:45:01.138806 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497845-gwlpr"] Jan 31 14:45:01 crc kubenswrapper[4751]: W0131 14:45:01.182034 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a7534cc_afa8_4cd1_acb0_e4269e55316b.slice/crio-7610769ab3f7dc50064969e1696261c3c40fd1cfc71d4e432425c19f9e85417a WatchSource:0}: Error finding container 7610769ab3f7dc50064969e1696261c3c40fd1cfc71d4e432425c19f9e85417a: Status 404 returned error can't find the container with id 7610769ab3f7dc50064969e1696261c3c40fd1cfc71d4e432425c19f9e85417a Jan 31 14:45:01 crc kubenswrapper[4751]: I0131 14:45:01.437840 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"4ee27ad5-3acb-4388-a964-3b526b79e776","Type":"ContainerDied","Data":"6ffb197e3834ae2da47e7ac1e9c5209aca77770c8ec1ba80483d9a0a51243fd6"} Jan 31 14:45:01 crc kubenswrapper[4751]: I0131 14:45:01.438147 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ffb197e3834ae2da47e7ac1e9c5209aca77770c8ec1ba80483d9a0a51243fd6" Jan 31 14:45:01 crc kubenswrapper[4751]: I0131 14:45:01.437896 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 31 14:45:01 crc kubenswrapper[4751]: I0131 14:45:01.446632 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-gwlpr" event={"ID":"7a7534cc-afa8-4cd1-acb0-e4269e55316b","Type":"ContainerStarted","Data":"14e3784ceb4cadc39980cddb4a29a7503a31cfa20643871453d5f7d2495d2d0d"} Jan 31 14:45:01 crc kubenswrapper[4751]: I0131 14:45:01.446816 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-gwlpr" event={"ID":"7a7534cc-afa8-4cd1-acb0-e4269e55316b","Type":"ContainerStarted","Data":"7610769ab3f7dc50064969e1696261c3c40fd1cfc71d4e432425c19f9e85417a"} Jan 31 14:45:01 crc kubenswrapper[4751]: I0131 14:45:01.449574 4751 generic.go:334] "Generic (PLEG): container finished" podID="d5c0f5c8-cecf-451f-abef-bf357716eb71" containerID="ec5579b0b5c03bbe363906a09e9b8073fa04eb6f15f0254accde5725abd7492c" exitCode=0 Jan 31 14:45:01 crc kubenswrapper[4751]: I0131 14:45:01.449662 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ln2lx" event={"ID":"d5c0f5c8-cecf-451f-abef-bf357716eb71","Type":"ContainerDied","Data":"ec5579b0b5c03bbe363906a09e9b8073fa04eb6f15f0254accde5725abd7492c"} Jan 31 14:45:01 crc kubenswrapper[4751]: I0131 14:45:01.454885 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7j7f" event={"ID":"f614f9ab-b5e2-4548-93e7-571d1ffb57b0","Type":"ContainerStarted","Data":"060b3a3ddee4e9105f7ffb6a1ce801e4a26650b10b65707cadc77226dc18ea06"} Jan 31 14:45:01 crc kubenswrapper[4751]: I0131 14:45:01.495809 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-gwlpr" podStartSLOduration=1.495781953 podStartE2EDuration="1.495781953s" podCreationTimestamp="2026-01-31 14:45:00 +0000 UTC" 
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:45:01.467459299 +0000 UTC m=+205.842172194" watchObservedRunningTime="2026-01-31 14:45:01.495781953 +0000 UTC m=+205.870494838" Jan 31 14:45:01 crc kubenswrapper[4751]: I0131 14:45:01.496221 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-s7j7f" podStartSLOduration=4.663128504 podStartE2EDuration="52.496215074s" podCreationTimestamp="2026-01-31 14:44:09 +0000 UTC" firstStartedPulling="2026-01-31 14:44:13.055692183 +0000 UTC m=+157.430405068" lastFinishedPulling="2026-01-31 14:45:00.888778723 +0000 UTC m=+205.263491638" observedRunningTime="2026-01-31 14:45:01.489510898 +0000 UTC m=+205.864223793" watchObservedRunningTime="2026-01-31 14:45:01.496215074 +0000 UTC m=+205.870927959" Jan 31 14:45:02 crc kubenswrapper[4751]: I0131 14:45:02.461026 4751 generic.go:334] "Generic (PLEG): container finished" podID="7a7534cc-afa8-4cd1-acb0-e4269e55316b" containerID="14e3784ceb4cadc39980cddb4a29a7503a31cfa20643871453d5f7d2495d2d0d" exitCode=0 Jan 31 14:45:02 crc kubenswrapper[4751]: I0131 14:45:02.461098 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-gwlpr" event={"ID":"7a7534cc-afa8-4cd1-acb0-e4269e55316b","Type":"ContainerDied","Data":"14e3784ceb4cadc39980cddb4a29a7503a31cfa20643871453d5f7d2495d2d0d"} Jan 31 14:45:02 crc kubenswrapper[4751]: I0131 14:45:02.464795 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ln2lx" event={"ID":"d5c0f5c8-cecf-451f-abef-bf357716eb71","Type":"ContainerStarted","Data":"9543af933d7edd0ace2113eee9860f1097d0ebff37536026931c946593943f0a"} Jan 31 14:45:02 crc kubenswrapper[4751]: I0131 14:45:02.466819 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2lq4t" 
event={"ID":"c447796d-48ac-4eeb-8fe6-ad411966b3d3","Type":"ContainerStarted","Data":"6ee8fb8b12b6ee8cd20623ca96e7a87fc43d879e6d76a01bc3e55e235825e807"} Jan 31 14:45:02 crc kubenswrapper[4751]: I0131 14:45:02.518936 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ln2lx" podStartSLOduration=2.384002687 podStartE2EDuration="56.518917329s" podCreationTimestamp="2026-01-31 14:44:06 +0000 UTC" firstStartedPulling="2026-01-31 14:44:07.802103636 +0000 UTC m=+152.176816521" lastFinishedPulling="2026-01-31 14:45:01.937018268 +0000 UTC m=+206.311731163" observedRunningTime="2026-01-31 14:45:02.517938693 +0000 UTC m=+206.892651588" watchObservedRunningTime="2026-01-31 14:45:02.518917329 +0000 UTC m=+206.893630214" Jan 31 14:45:03 crc kubenswrapper[4751]: I0131 14:45:03.473592 4751 generic.go:334] "Generic (PLEG): container finished" podID="c447796d-48ac-4eeb-8fe6-ad411966b3d3" containerID="6ee8fb8b12b6ee8cd20623ca96e7a87fc43d879e6d76a01bc3e55e235825e807" exitCode=0 Jan 31 14:45:03 crc kubenswrapper[4751]: I0131 14:45:03.473676 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2lq4t" event={"ID":"c447796d-48ac-4eeb-8fe6-ad411966b3d3","Type":"ContainerDied","Data":"6ee8fb8b12b6ee8cd20623ca96e7a87fc43d879e6d76a01bc3e55e235825e807"} Jan 31 14:45:03 crc kubenswrapper[4751]: I0131 14:45:03.790029 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-gwlpr" Jan 31 14:45:03 crc kubenswrapper[4751]: I0131 14:45:03.932697 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a7534cc-afa8-4cd1-acb0-e4269e55316b-secret-volume\") pod \"7a7534cc-afa8-4cd1-acb0-e4269e55316b\" (UID: \"7a7534cc-afa8-4cd1-acb0-e4269e55316b\") " Jan 31 14:45:03 crc kubenswrapper[4751]: I0131 14:45:03.933272 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a7534cc-afa8-4cd1-acb0-e4269e55316b-config-volume\") pod \"7a7534cc-afa8-4cd1-acb0-e4269e55316b\" (UID: \"7a7534cc-afa8-4cd1-acb0-e4269e55316b\") " Jan 31 14:45:03 crc kubenswrapper[4751]: I0131 14:45:03.933613 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pd4xf\" (UniqueName: \"kubernetes.io/projected/7a7534cc-afa8-4cd1-acb0-e4269e55316b-kube-api-access-pd4xf\") pod \"7a7534cc-afa8-4cd1-acb0-e4269e55316b\" (UID: \"7a7534cc-afa8-4cd1-acb0-e4269e55316b\") " Jan 31 14:45:03 crc kubenswrapper[4751]: I0131 14:45:03.934104 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a7534cc-afa8-4cd1-acb0-e4269e55316b-config-volume" (OuterVolumeSpecName: "config-volume") pod "7a7534cc-afa8-4cd1-acb0-e4269e55316b" (UID: "7a7534cc-afa8-4cd1-acb0-e4269e55316b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:45:03 crc kubenswrapper[4751]: I0131 14:45:03.939923 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a7534cc-afa8-4cd1-acb0-e4269e55316b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7a7534cc-afa8-4cd1-acb0-e4269e55316b" (UID: "7a7534cc-afa8-4cd1-acb0-e4269e55316b"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:45:03 crc kubenswrapper[4751]: I0131 14:45:03.948035 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a7534cc-afa8-4cd1-acb0-e4269e55316b-kube-api-access-pd4xf" (OuterVolumeSpecName: "kube-api-access-pd4xf") pod "7a7534cc-afa8-4cd1-acb0-e4269e55316b" (UID: "7a7534cc-afa8-4cd1-acb0-e4269e55316b"). InnerVolumeSpecName "kube-api-access-pd4xf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:45:04 crc kubenswrapper[4751]: I0131 14:45:04.035682 4751 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a7534cc-afa8-4cd1-acb0-e4269e55316b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:04 crc kubenswrapper[4751]: I0131 14:45:04.035716 4751 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a7534cc-afa8-4cd1-acb0-e4269e55316b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:04 crc kubenswrapper[4751]: I0131 14:45:04.035726 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pd4xf\" (UniqueName: \"kubernetes.io/projected/7a7534cc-afa8-4cd1-acb0-e4269e55316b-kube-api-access-pd4xf\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:04 crc kubenswrapper[4751]: I0131 14:45:04.480843 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-gwlpr" event={"ID":"7a7534cc-afa8-4cd1-acb0-e4269e55316b","Type":"ContainerDied","Data":"7610769ab3f7dc50064969e1696261c3c40fd1cfc71d4e432425c19f9e85417a"} Jan 31 14:45:04 crc kubenswrapper[4751]: I0131 14:45:04.481877 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7610769ab3f7dc50064969e1696261c3c40fd1cfc71d4e432425c19f9e85417a" Jan 31 14:45:04 crc kubenswrapper[4751]: I0131 14:45:04.480881 4751 util.go:48] "No ready sandbox for pod can be 
found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497845-gwlpr" Jan 31 14:45:04 crc kubenswrapper[4751]: I0131 14:45:04.482730 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2lq4t" event={"ID":"c447796d-48ac-4eeb-8fe6-ad411966b3d3","Type":"ContainerStarted","Data":"cd00a1023a77b74abf41fc259fe9b2a475f7ada25dc1b39aa83f13c57794ccfd"} Jan 31 14:45:04 crc kubenswrapper[4751]: I0131 14:45:04.509365 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2lq4t" podStartSLOduration=2.334571377 podStartE2EDuration="58.509345464s" podCreationTimestamp="2026-01-31 14:44:06 +0000 UTC" firstStartedPulling="2026-01-31 14:44:07.795598224 +0000 UTC m=+152.170311109" lastFinishedPulling="2026-01-31 14:45:03.970372311 +0000 UTC m=+208.345085196" observedRunningTime="2026-01-31 14:45:04.508687687 +0000 UTC m=+208.883400602" watchObservedRunningTime="2026-01-31 14:45:04.509345464 +0000 UTC m=+208.884058359" Jan 31 14:45:06 crc kubenswrapper[4751]: I0131 14:45:06.439856 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-wcnsn" Jan 31 14:45:06 crc kubenswrapper[4751]: I0131 14:45:06.440920 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-wcnsn" Jan 31 14:45:06 crc kubenswrapper[4751]: I0131 14:45:06.502958 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gktqp" event={"ID":"0cfb2e52-7371-4d38-994c-92b5b7d123cc","Type":"ContainerStarted","Data":"0fd8fddc66836c1f7f3a6139ad51fa7a751ab965677515d2805ae4e6c08a2765"} Jan 31 14:45:06 crc kubenswrapper[4751]: I0131 14:45:06.830921 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ln2lx" Jan 31 14:45:06 crc kubenswrapper[4751]: 
I0131 14:45:06.832640 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ln2lx" Jan 31 14:45:06 crc kubenswrapper[4751]: I0131 14:45:06.905808 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ln2lx" Jan 31 14:45:06 crc kubenswrapper[4751]: I0131 14:45:06.906489 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-wcnsn" Jan 31 14:45:06 crc kubenswrapper[4751]: I0131 14:45:06.957839 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-wcnsn" Jan 31 14:45:07 crc kubenswrapper[4751]: I0131 14:45:07.006061 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2lq4t" Jan 31 14:45:07 crc kubenswrapper[4751]: I0131 14:45:07.006685 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-2lq4t" Jan 31 14:45:07 crc kubenswrapper[4751]: I0131 14:45:07.059884 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2lq4t" Jan 31 14:45:07 crc kubenswrapper[4751]: I0131 14:45:07.513627 4751 generic.go:334] "Generic (PLEG): container finished" podID="e656c7af-fbd9-4e9c-ae61-d4142d37c89f" containerID="874aebfb442c94d60aaad947db92520e6e5ff745ee226afefd00dd9dc85cb564" exitCode=0 Jan 31 14:45:07 crc kubenswrapper[4751]: I0131 14:45:07.513909 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k2xfl" event={"ID":"e656c7af-fbd9-4e9c-ae61-d4142d37c89f","Type":"ContainerDied","Data":"874aebfb442c94d60aaad947db92520e6e5ff745ee226afefd00dd9dc85cb564"} Jan 31 14:45:07 crc kubenswrapper[4751]: I0131 14:45:07.519907 4751 generic.go:334] "Generic (PLEG): container finished" 
podID="0cfb2e52-7371-4d38-994c-92b5b7d123cc" containerID="0fd8fddc66836c1f7f3a6139ad51fa7a751ab965677515d2805ae4e6c08a2765" exitCode=0 Jan 31 14:45:07 crc kubenswrapper[4751]: I0131 14:45:07.520018 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gktqp" event={"ID":"0cfb2e52-7371-4d38-994c-92b5b7d123cc","Type":"ContainerDied","Data":"0fd8fddc66836c1f7f3a6139ad51fa7a751ab965677515d2805ae4e6c08a2765"} Jan 31 14:45:07 crc kubenswrapper[4751]: I0131 14:45:07.523524 4751 generic.go:334] "Generic (PLEG): container finished" podID="e771b68a-beea-4c8b-a085-b869155ca20d" containerID="449419de3999925adbffe064ed4c6d253fd8062b2c5e50eef6d0e389bc5f1a74" exitCode=0 Jan 31 14:45:07 crc kubenswrapper[4751]: I0131 14:45:07.523689 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nfjx5" event={"ID":"e771b68a-beea-4c8b-a085-b869155ca20d","Type":"ContainerDied","Data":"449419de3999925adbffe064ed4c6d253fd8062b2c5e50eef6d0e389bc5f1a74"} Jan 31 14:45:07 crc kubenswrapper[4751]: I0131 14:45:07.583586 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ln2lx" Jan 31 14:45:08 crc kubenswrapper[4751]: I0131 14:45:08.529781 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gktqp" event={"ID":"0cfb2e52-7371-4d38-994c-92b5b7d123cc","Type":"ContainerStarted","Data":"632dd7cf21c157e19b8b506aada8a2a0cc9ee7a4c7089d92374fc5dc9f67b1ea"} Jan 31 14:45:08 crc kubenswrapper[4751]: I0131 14:45:08.534313 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nfjx5" event={"ID":"e771b68a-beea-4c8b-a085-b869155ca20d","Type":"ContainerStarted","Data":"3b08935560f35380fa53730c9bdd64653cc7a118a7c4b8bd45e5eeddbd415e24"} Jan 31 14:45:08 crc kubenswrapper[4751]: I0131 14:45:08.538287 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/redhat-marketplace-k2xfl" event={"ID":"e656c7af-fbd9-4e9c-ae61-d4142d37c89f","Type":"ContainerStarted","Data":"3bb7101aeb47dd5d5b9aa6ef1075e32a424c360c1ebaa7fd0787c20e4303f647"} Jan 31 14:45:08 crc kubenswrapper[4751]: I0131 14:45:08.552290 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-gktqp" podStartSLOduration=2.199002408 podStartE2EDuration="59.552274663s" podCreationTimestamp="2026-01-31 14:44:09 +0000 UTC" firstStartedPulling="2026-01-31 14:44:10.916982288 +0000 UTC m=+155.291695173" lastFinishedPulling="2026-01-31 14:45:08.270254543 +0000 UTC m=+212.644967428" observedRunningTime="2026-01-31 14:45:08.550594018 +0000 UTC m=+212.925306903" watchObservedRunningTime="2026-01-31 14:45:08.552274663 +0000 UTC m=+212.926987548" Jan 31 14:45:08 crc kubenswrapper[4751]: I0131 14:45:08.571264 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-nfjx5" podStartSLOduration=2.338379742 podStartE2EDuration="1m0.571245271s" podCreationTimestamp="2026-01-31 14:44:08 +0000 UTC" firstStartedPulling="2026-01-31 14:44:09.899267564 +0000 UTC m=+154.273980449" lastFinishedPulling="2026-01-31 14:45:08.132133063 +0000 UTC m=+212.506845978" observedRunningTime="2026-01-31 14:45:08.56855237 +0000 UTC m=+212.943265255" watchObservedRunningTime="2026-01-31 14:45:08.571245271 +0000 UTC m=+212.945958176" Jan 31 14:45:08 crc kubenswrapper[4751]: I0131 14:45:08.587080 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-k2xfl" podStartSLOduration=2.318436348 podStartE2EDuration="1m0.587041166s" podCreationTimestamp="2026-01-31 14:44:08 +0000 UTC" firstStartedPulling="2026-01-31 14:44:09.899322416 +0000 UTC m=+154.274035301" lastFinishedPulling="2026-01-31 14:45:08.167927234 +0000 UTC m=+212.542640119" observedRunningTime="2026-01-31 14:45:08.583801881 +0000 UTC 
m=+212.958514776" watchObservedRunningTime="2026-01-31 14:45:08.587041166 +0000 UTC m=+212.961754051" Jan 31 14:45:08 crc kubenswrapper[4751]: I0131 14:45:08.815920 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-nfjx5" Jan 31 14:45:08 crc kubenswrapper[4751]: I0131 14:45:08.816258 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-nfjx5" Jan 31 14:45:08 crc kubenswrapper[4751]: I0131 14:45:08.896811 4751 patch_prober.go:28] interesting pod/machine-config-daemon-2wpj7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 14:45:08 crc kubenswrapper[4751]: I0131 14:45:08.896871 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 14:45:08 crc kubenswrapper[4751]: I0131 14:45:08.896915 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" Jan 31 14:45:08 crc kubenswrapper[4751]: I0131 14:45:08.897535 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3956d143be77f4a50143f9678eb51ab7871e250cae73d87c9e7fce2575e466c2"} pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 14:45:08 crc kubenswrapper[4751]: I0131 14:45:08.897639 4751 kuberuntime_container.go:808] "Killing container with a grace 
period" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerName="machine-config-daemon" containerID="cri-o://3956d143be77f4a50143f9678eb51ab7871e250cae73d87c9e7fce2575e466c2" gracePeriod=600 Jan 31 14:45:09 crc kubenswrapper[4751]: I0131 14:45:09.545595 4751 generic.go:334] "Generic (PLEG): container finished" podID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerID="3956d143be77f4a50143f9678eb51ab7871e250cae73d87c9e7fce2575e466c2" exitCode=0 Jan 31 14:45:09 crc kubenswrapper[4751]: I0131 14:45:09.545674 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" event={"ID":"b4c170e8-22c9-43a9-8b34-9d626c2ccddc","Type":"ContainerDied","Data":"3956d143be77f4a50143f9678eb51ab7871e250cae73d87c9e7fce2575e466c2"} Jan 31 14:45:09 crc kubenswrapper[4751]: I0131 14:45:09.546326 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" event={"ID":"b4c170e8-22c9-43a9-8b34-9d626c2ccddc","Type":"ContainerStarted","Data":"45cb0d3a062f00471c149bf8e8ee7eaef0df67968aef3870677e63ed898aa00d"} Jan 31 14:45:09 crc kubenswrapper[4751]: I0131 14:45:09.548577 4751 generic.go:334] "Generic (PLEG): container finished" podID="8d5f1383-42d7-47a1-9e47-8dba038241d2" containerID="c0f53c12a6e17e599de6a624dae5a0ba532d7e88bc9baf9838475b082d03f347" exitCode=0 Jan 31 14:45:09 crc kubenswrapper[4751]: I0131 14:45:09.548649 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m4m6r" event={"ID":"8d5f1383-42d7-47a1-9e47-8dba038241d2","Type":"ContainerDied","Data":"c0f53c12a6e17e599de6a624dae5a0ba532d7e88bc9baf9838475b082d03f347"} Jan 31 14:45:09 crc kubenswrapper[4751]: I0131 14:45:09.589050 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-gktqp" Jan 31 14:45:09 crc kubenswrapper[4751]: 
I0131 14:45:09.589116 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-gktqp" Jan 31 14:45:09 crc kubenswrapper[4751]: I0131 14:45:09.773985 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ln2lx"] Jan 31 14:45:09 crc kubenswrapper[4751]: I0131 14:45:09.774250 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ln2lx" podUID="d5c0f5c8-cecf-451f-abef-bf357716eb71" containerName="registry-server" containerID="cri-o://9543af933d7edd0ace2113eee9860f1097d0ebff37536026931c946593943f0a" gracePeriod=2 Jan 31 14:45:09 crc kubenswrapper[4751]: I0131 14:45:09.850616 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-nfjx5" podUID="e771b68a-beea-4c8b-a085-b869155ca20d" containerName="registry-server" probeResult="failure" output=< Jan 31 14:45:09 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 31 14:45:09 crc kubenswrapper[4751]: > Jan 31 14:45:10 crc kubenswrapper[4751]: I0131 14:45:10.000405 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-s7j7f" Jan 31 14:45:10 crc kubenswrapper[4751]: I0131 14:45:10.000448 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-s7j7f" Jan 31 14:45:10 crc kubenswrapper[4751]: I0131 14:45:10.067147 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-s7j7f" Jan 31 14:45:10 crc kubenswrapper[4751]: I0131 14:45:10.264399 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ln2lx" Jan 31 14:45:10 crc kubenswrapper[4751]: I0131 14:45:10.328475 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5c0f5c8-cecf-451f-abef-bf357716eb71-catalog-content\") pod \"d5c0f5c8-cecf-451f-abef-bf357716eb71\" (UID: \"d5c0f5c8-cecf-451f-abef-bf357716eb71\") " Jan 31 14:45:10 crc kubenswrapper[4751]: I0131 14:45:10.328583 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xd7bp\" (UniqueName: \"kubernetes.io/projected/d5c0f5c8-cecf-451f-abef-bf357716eb71-kube-api-access-xd7bp\") pod \"d5c0f5c8-cecf-451f-abef-bf357716eb71\" (UID: \"d5c0f5c8-cecf-451f-abef-bf357716eb71\") " Jan 31 14:45:10 crc kubenswrapper[4751]: I0131 14:45:10.328621 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5c0f5c8-cecf-451f-abef-bf357716eb71-utilities\") pod \"d5c0f5c8-cecf-451f-abef-bf357716eb71\" (UID: \"d5c0f5c8-cecf-451f-abef-bf357716eb71\") " Jan 31 14:45:10 crc kubenswrapper[4751]: I0131 14:45:10.329893 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5c0f5c8-cecf-451f-abef-bf357716eb71-utilities" (OuterVolumeSpecName: "utilities") pod "d5c0f5c8-cecf-451f-abef-bf357716eb71" (UID: "d5c0f5c8-cecf-451f-abef-bf357716eb71"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:45:10 crc kubenswrapper[4751]: I0131 14:45:10.334909 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5c0f5c8-cecf-451f-abef-bf357716eb71-kube-api-access-xd7bp" (OuterVolumeSpecName: "kube-api-access-xd7bp") pod "d5c0f5c8-cecf-451f-abef-bf357716eb71" (UID: "d5c0f5c8-cecf-451f-abef-bf357716eb71"). InnerVolumeSpecName "kube-api-access-xd7bp". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:45:10 crc kubenswrapper[4751]: I0131 14:45:10.402533 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d5c0f5c8-cecf-451f-abef-bf357716eb71-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d5c0f5c8-cecf-451f-abef-bf357716eb71" (UID: "d5c0f5c8-cecf-451f-abef-bf357716eb71"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:45:10 crc kubenswrapper[4751]: I0131 14:45:10.435028 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d5c0f5c8-cecf-451f-abef-bf357716eb71-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:10 crc kubenswrapper[4751]: I0131 14:45:10.435084 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xd7bp\" (UniqueName: \"kubernetes.io/projected/d5c0f5c8-cecf-451f-abef-bf357716eb71-kube-api-access-xd7bp\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:10 crc kubenswrapper[4751]: I0131 14:45:10.435111 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d5c0f5c8-cecf-451f-abef-bf357716eb71-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:10 crc kubenswrapper[4751]: I0131 14:45:10.443573 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-xr2gt"] Jan 31 14:45:10 crc kubenswrapper[4751]: I0131 14:45:10.556277 4751 generic.go:334] "Generic (PLEG): container finished" podID="d5c0f5c8-cecf-451f-abef-bf357716eb71" containerID="9543af933d7edd0ace2113eee9860f1097d0ebff37536026931c946593943f0a" exitCode=0 Jan 31 14:45:10 crc kubenswrapper[4751]: I0131 14:45:10.556353 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-ln2lx" Jan 31 14:45:10 crc kubenswrapper[4751]: I0131 14:45:10.556374 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ln2lx" event={"ID":"d5c0f5c8-cecf-451f-abef-bf357716eb71","Type":"ContainerDied","Data":"9543af933d7edd0ace2113eee9860f1097d0ebff37536026931c946593943f0a"} Jan 31 14:45:10 crc kubenswrapper[4751]: I0131 14:45:10.557673 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ln2lx" event={"ID":"d5c0f5c8-cecf-451f-abef-bf357716eb71","Type":"ContainerDied","Data":"a9f4794a6036dec4476c4be7ee2587554c0cf25782f49b8a2635038cb9771dcf"} Jan 31 14:45:10 crc kubenswrapper[4751]: I0131 14:45:10.557715 4751 scope.go:117] "RemoveContainer" containerID="9543af933d7edd0ace2113eee9860f1097d0ebff37536026931c946593943f0a" Jan 31 14:45:10 crc kubenswrapper[4751]: I0131 14:45:10.560649 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m4m6r" event={"ID":"8d5f1383-42d7-47a1-9e47-8dba038241d2","Type":"ContainerStarted","Data":"eabca8f8fcdbfb2f04b488498b2a615e9946a5ba739f9fb75c570ef168f4bcd8"} Jan 31 14:45:10 crc kubenswrapper[4751]: I0131 14:45:10.576123 4751 scope.go:117] "RemoveContainer" containerID="ec5579b0b5c03bbe363906a09e9b8073fa04eb6f15f0254accde5725abd7492c" Jan 31 14:45:10 crc kubenswrapper[4751]: I0131 14:45:10.594659 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-m4m6r" podStartSLOduration=2.199564629 podStartE2EDuration="1m4.594641732s" podCreationTimestamp="2026-01-31 14:44:06 +0000 UTC" firstStartedPulling="2026-01-31 14:44:07.792618566 +0000 UTC m=+152.167331451" lastFinishedPulling="2026-01-31 14:45:10.187695669 +0000 UTC m=+214.562408554" observedRunningTime="2026-01-31 14:45:10.583319895 +0000 UTC m=+214.958032780" watchObservedRunningTime="2026-01-31 
14:45:10.594641732 +0000 UTC m=+214.969354617" Jan 31 14:45:10 crc kubenswrapper[4751]: I0131 14:45:10.598347 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ln2lx"] Jan 31 14:45:10 crc kubenswrapper[4751]: I0131 14:45:10.606368 4751 scope.go:117] "RemoveContainer" containerID="4b70b0f5c40fae7241cf1b33c7ddc52732dc42394eac071686d9ade2daf20d08" Jan 31 14:45:10 crc kubenswrapper[4751]: I0131 14:45:10.610127 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ln2lx"] Jan 31 14:45:10 crc kubenswrapper[4751]: I0131 14:45:10.615776 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-s7j7f" Jan 31 14:45:10 crc kubenswrapper[4751]: I0131 14:45:10.634260 4751 scope.go:117] "RemoveContainer" containerID="9543af933d7edd0ace2113eee9860f1097d0ebff37536026931c946593943f0a" Jan 31 14:45:10 crc kubenswrapper[4751]: E0131 14:45:10.637965 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9543af933d7edd0ace2113eee9860f1097d0ebff37536026931c946593943f0a\": container with ID starting with 9543af933d7edd0ace2113eee9860f1097d0ebff37536026931c946593943f0a not found: ID does not exist" containerID="9543af933d7edd0ace2113eee9860f1097d0ebff37536026931c946593943f0a" Jan 31 14:45:10 crc kubenswrapper[4751]: I0131 14:45:10.638005 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9543af933d7edd0ace2113eee9860f1097d0ebff37536026931c946593943f0a"} err="failed to get container status \"9543af933d7edd0ace2113eee9860f1097d0ebff37536026931c946593943f0a\": rpc error: code = NotFound desc = could not find container \"9543af933d7edd0ace2113eee9860f1097d0ebff37536026931c946593943f0a\": container with ID starting with 9543af933d7edd0ace2113eee9860f1097d0ebff37536026931c946593943f0a not found: ID does not exist" 
Jan 31 14:45:10 crc kubenswrapper[4751]: I0131 14:45:10.638030 4751 scope.go:117] "RemoveContainer" containerID="ec5579b0b5c03bbe363906a09e9b8073fa04eb6f15f0254accde5725abd7492c" Jan 31 14:45:10 crc kubenswrapper[4751]: I0131 14:45:10.641139 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-gktqp" podUID="0cfb2e52-7371-4d38-994c-92b5b7d123cc" containerName="registry-server" probeResult="failure" output=< Jan 31 14:45:10 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 31 14:45:10 crc kubenswrapper[4751]: > Jan 31 14:45:10 crc kubenswrapper[4751]: E0131 14:45:10.641169 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ec5579b0b5c03bbe363906a09e9b8073fa04eb6f15f0254accde5725abd7492c\": container with ID starting with ec5579b0b5c03bbe363906a09e9b8073fa04eb6f15f0254accde5725abd7492c not found: ID does not exist" containerID="ec5579b0b5c03bbe363906a09e9b8073fa04eb6f15f0254accde5725abd7492c" Jan 31 14:45:10 crc kubenswrapper[4751]: I0131 14:45:10.641227 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ec5579b0b5c03bbe363906a09e9b8073fa04eb6f15f0254accde5725abd7492c"} err="failed to get container status \"ec5579b0b5c03bbe363906a09e9b8073fa04eb6f15f0254accde5725abd7492c\": rpc error: code = NotFound desc = could not find container \"ec5579b0b5c03bbe363906a09e9b8073fa04eb6f15f0254accde5725abd7492c\": container with ID starting with ec5579b0b5c03bbe363906a09e9b8073fa04eb6f15f0254accde5725abd7492c not found: ID does not exist" Jan 31 14:45:10 crc kubenswrapper[4751]: I0131 14:45:10.641256 4751 scope.go:117] "RemoveContainer" containerID="4b70b0f5c40fae7241cf1b33c7ddc52732dc42394eac071686d9ade2daf20d08" Jan 31 14:45:10 crc kubenswrapper[4751]: E0131 14:45:10.644217 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = 
could not find container \"4b70b0f5c40fae7241cf1b33c7ddc52732dc42394eac071686d9ade2daf20d08\": container with ID starting with 4b70b0f5c40fae7241cf1b33c7ddc52732dc42394eac071686d9ade2daf20d08 not found: ID does not exist" containerID="4b70b0f5c40fae7241cf1b33c7ddc52732dc42394eac071686d9ade2daf20d08" Jan 31 14:45:10 crc kubenswrapper[4751]: I0131 14:45:10.644263 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4b70b0f5c40fae7241cf1b33c7ddc52732dc42394eac071686d9ade2daf20d08"} err="failed to get container status \"4b70b0f5c40fae7241cf1b33c7ddc52732dc42394eac071686d9ade2daf20d08\": rpc error: code = NotFound desc = could not find container \"4b70b0f5c40fae7241cf1b33c7ddc52732dc42394eac071686d9ade2daf20d08\": container with ID starting with 4b70b0f5c40fae7241cf1b33c7ddc52732dc42394eac071686d9ade2daf20d08 not found: ID does not exist" Jan 31 14:45:12 crc kubenswrapper[4751]: I0131 14:45:12.175202 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s7j7f"] Jan 31 14:45:12 crc kubenswrapper[4751]: I0131 14:45:12.413195 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5c0f5c8-cecf-451f-abef-bf357716eb71" path="/var/lib/kubelet/pods/d5c0f5c8-cecf-451f-abef-bf357716eb71/volumes" Jan 31 14:45:12 crc kubenswrapper[4751]: I0131 14:45:12.572563 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-s7j7f" podUID="f614f9ab-b5e2-4548-93e7-571d1ffb57b0" containerName="registry-server" containerID="cri-o://060b3a3ddee4e9105f7ffb6a1ce801e4a26650b10b65707cadc77226dc18ea06" gracePeriod=2 Jan 31 14:45:13 crc kubenswrapper[4751]: I0131 14:45:13.588932 4751 generic.go:334] "Generic (PLEG): container finished" podID="f614f9ab-b5e2-4548-93e7-571d1ffb57b0" containerID="060b3a3ddee4e9105f7ffb6a1ce801e4a26650b10b65707cadc77226dc18ea06" exitCode=0 Jan 31 14:45:13 crc kubenswrapper[4751]: I0131 
14:45:13.589015 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7j7f" event={"ID":"f614f9ab-b5e2-4548-93e7-571d1ffb57b0","Type":"ContainerDied","Data":"060b3a3ddee4e9105f7ffb6a1ce801e4a26650b10b65707cadc77226dc18ea06"} Jan 31 14:45:13 crc kubenswrapper[4751]: I0131 14:45:13.829803 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s7j7f" Jan 31 14:45:13 crc kubenswrapper[4751]: I0131 14:45:13.980082 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f614f9ab-b5e2-4548-93e7-571d1ffb57b0-catalog-content\") pod \"f614f9ab-b5e2-4548-93e7-571d1ffb57b0\" (UID: \"f614f9ab-b5e2-4548-93e7-571d1ffb57b0\") " Jan 31 14:45:13 crc kubenswrapper[4751]: I0131 14:45:13.980147 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f614f9ab-b5e2-4548-93e7-571d1ffb57b0-utilities\") pod \"f614f9ab-b5e2-4548-93e7-571d1ffb57b0\" (UID: \"f614f9ab-b5e2-4548-93e7-571d1ffb57b0\") " Jan 31 14:45:13 crc kubenswrapper[4751]: I0131 14:45:13.980320 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hwzm\" (UniqueName: \"kubernetes.io/projected/f614f9ab-b5e2-4548-93e7-571d1ffb57b0-kube-api-access-8hwzm\") pod \"f614f9ab-b5e2-4548-93e7-571d1ffb57b0\" (UID: \"f614f9ab-b5e2-4548-93e7-571d1ffb57b0\") " Jan 31 14:45:13 crc kubenswrapper[4751]: I0131 14:45:13.981524 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f614f9ab-b5e2-4548-93e7-571d1ffb57b0-utilities" (OuterVolumeSpecName: "utilities") pod "f614f9ab-b5e2-4548-93e7-571d1ffb57b0" (UID: "f614f9ab-b5e2-4548-93e7-571d1ffb57b0"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:45:13 crc kubenswrapper[4751]: I0131 14:45:13.987817 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f614f9ab-b5e2-4548-93e7-571d1ffb57b0-kube-api-access-8hwzm" (OuterVolumeSpecName: "kube-api-access-8hwzm") pod "f614f9ab-b5e2-4548-93e7-571d1ffb57b0" (UID: "f614f9ab-b5e2-4548-93e7-571d1ffb57b0"). InnerVolumeSpecName "kube-api-access-8hwzm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:45:14 crc kubenswrapper[4751]: I0131 14:45:14.081831 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hwzm\" (UniqueName: \"kubernetes.io/projected/f614f9ab-b5e2-4548-93e7-571d1ffb57b0-kube-api-access-8hwzm\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:14 crc kubenswrapper[4751]: I0131 14:45:14.082111 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f614f9ab-b5e2-4548-93e7-571d1ffb57b0-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:14 crc kubenswrapper[4751]: I0131 14:45:14.091440 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f614f9ab-b5e2-4548-93e7-571d1ffb57b0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f614f9ab-b5e2-4548-93e7-571d1ffb57b0" (UID: "f614f9ab-b5e2-4548-93e7-571d1ffb57b0"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:45:14 crc kubenswrapper[4751]: I0131 14:45:14.183649 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f614f9ab-b5e2-4548-93e7-571d1ffb57b0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:14 crc kubenswrapper[4751]: I0131 14:45:14.596823 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-s7j7f" event={"ID":"f614f9ab-b5e2-4548-93e7-571d1ffb57b0","Type":"ContainerDied","Data":"7b416007999209b30e30ac3cbb706b9a31917cc6ff3256ae9a397696b89670d4"} Jan 31 14:45:14 crc kubenswrapper[4751]: I0131 14:45:14.596886 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-s7j7f" Jan 31 14:45:14 crc kubenswrapper[4751]: I0131 14:45:14.596908 4751 scope.go:117] "RemoveContainer" containerID="060b3a3ddee4e9105f7ffb6a1ce801e4a26650b10b65707cadc77226dc18ea06" Jan 31 14:45:14 crc kubenswrapper[4751]: I0131 14:45:14.612143 4751 scope.go:117] "RemoveContainer" containerID="f9bdddf94b5f6d16e3861f0fec527d5909cdc3d4c12d6d71c61a9d592a18874f" Jan 31 14:45:14 crc kubenswrapper[4751]: I0131 14:45:14.616235 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-s7j7f"] Jan 31 14:45:14 crc kubenswrapper[4751]: I0131 14:45:14.621523 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-s7j7f"] Jan 31 14:45:14 crc kubenswrapper[4751]: I0131 14:45:14.633545 4751 scope.go:117] "RemoveContainer" containerID="9dea3e4098c379086439a00ba95f58535865ef9c6e3300b004af608a3da30bb4" Jan 31 14:45:16 crc kubenswrapper[4751]: I0131 14:45:16.411960 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f614f9ab-b5e2-4548-93e7-571d1ffb57b0" path="/var/lib/kubelet/pods/f614f9ab-b5e2-4548-93e7-571d1ffb57b0/volumes" Jan 31 14:45:16 crc 
kubenswrapper[4751]: I0131 14:45:16.635713 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-m4m6r" Jan 31 14:45:16 crc kubenswrapper[4751]: I0131 14:45:16.635755 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-m4m6r" Jan 31 14:45:16 crc kubenswrapper[4751]: I0131 14:45:16.694605 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-m4m6r" Jan 31 14:45:17 crc kubenswrapper[4751]: I0131 14:45:17.053471 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2lq4t" Jan 31 14:45:17 crc kubenswrapper[4751]: I0131 14:45:17.652926 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-m4m6r" Jan 31 14:45:17 crc kubenswrapper[4751]: I0131 14:45:17.974607 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2lq4t"] Jan 31 14:45:17 crc kubenswrapper[4751]: I0131 14:45:17.975153 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-2lq4t" podUID="c447796d-48ac-4eeb-8fe6-ad411966b3d3" containerName="registry-server" containerID="cri-o://cd00a1023a77b74abf41fc259fe9b2a475f7ada25dc1b39aa83f13c57794ccfd" gracePeriod=2 Jan 31 14:45:18 crc kubenswrapper[4751]: I0131 14:45:18.447097 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-k2xfl" Jan 31 14:45:18 crc kubenswrapper[4751]: I0131 14:45:18.447249 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-k2xfl" Jan 31 14:45:18 crc kubenswrapper[4751]: I0131 14:45:18.481878 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/redhat-marketplace-k2xfl" Jan 31 14:45:18 crc kubenswrapper[4751]: E0131 14:45:18.588699 4751 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc447796d_48ac_4eeb_8fe6_ad411966b3d3.slice/crio-conmon-cd00a1023a77b74abf41fc259fe9b2a475f7ada25dc1b39aa83f13c57794ccfd.scope\": RecentStats: unable to find data in memory cache]" Jan 31 14:45:18 crc kubenswrapper[4751]: I0131 14:45:18.620713 4751 generic.go:334] "Generic (PLEG): container finished" podID="c447796d-48ac-4eeb-8fe6-ad411966b3d3" containerID="cd00a1023a77b74abf41fc259fe9b2a475f7ada25dc1b39aa83f13c57794ccfd" exitCode=0 Jan 31 14:45:18 crc kubenswrapper[4751]: I0131 14:45:18.620816 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2lq4t" event={"ID":"c447796d-48ac-4eeb-8fe6-ad411966b3d3","Type":"ContainerDied","Data":"cd00a1023a77b74abf41fc259fe9b2a475f7ada25dc1b39aa83f13c57794ccfd"} Jan 31 14:45:18 crc kubenswrapper[4751]: I0131 14:45:18.665352 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-k2xfl" Jan 31 14:45:18 crc kubenswrapper[4751]: I0131 14:45:18.854624 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-nfjx5" Jan 31 14:45:18 crc kubenswrapper[4751]: I0131 14:45:18.887756 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-nfjx5" Jan 31 14:45:19 crc kubenswrapper[4751]: I0131 14:45:19.475612 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2lq4t" Jan 31 14:45:19 crc kubenswrapper[4751]: I0131 14:45:19.552433 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c447796d-48ac-4eeb-8fe6-ad411966b3d3-utilities\") pod \"c447796d-48ac-4eeb-8fe6-ad411966b3d3\" (UID: \"c447796d-48ac-4eeb-8fe6-ad411966b3d3\") " Jan 31 14:45:19 crc kubenswrapper[4751]: I0131 14:45:19.552570 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2g96\" (UniqueName: \"kubernetes.io/projected/c447796d-48ac-4eeb-8fe6-ad411966b3d3-kube-api-access-n2g96\") pod \"c447796d-48ac-4eeb-8fe6-ad411966b3d3\" (UID: \"c447796d-48ac-4eeb-8fe6-ad411966b3d3\") " Jan 31 14:45:19 crc kubenswrapper[4751]: I0131 14:45:19.552620 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c447796d-48ac-4eeb-8fe6-ad411966b3d3-catalog-content\") pod \"c447796d-48ac-4eeb-8fe6-ad411966b3d3\" (UID: \"c447796d-48ac-4eeb-8fe6-ad411966b3d3\") " Jan 31 14:45:19 crc kubenswrapper[4751]: I0131 14:45:19.553403 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c447796d-48ac-4eeb-8fe6-ad411966b3d3-utilities" (OuterVolumeSpecName: "utilities") pod "c447796d-48ac-4eeb-8fe6-ad411966b3d3" (UID: "c447796d-48ac-4eeb-8fe6-ad411966b3d3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:45:19 crc kubenswrapper[4751]: I0131 14:45:19.558400 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c447796d-48ac-4eeb-8fe6-ad411966b3d3-kube-api-access-n2g96" (OuterVolumeSpecName: "kube-api-access-n2g96") pod "c447796d-48ac-4eeb-8fe6-ad411966b3d3" (UID: "c447796d-48ac-4eeb-8fe6-ad411966b3d3"). InnerVolumeSpecName "kube-api-access-n2g96". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:45:19 crc kubenswrapper[4751]: I0131 14:45:19.594458 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c447796d-48ac-4eeb-8fe6-ad411966b3d3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c447796d-48ac-4eeb-8fe6-ad411966b3d3" (UID: "c447796d-48ac-4eeb-8fe6-ad411966b3d3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:45:19 crc kubenswrapper[4751]: I0131 14:45:19.627710 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2lq4t" event={"ID":"c447796d-48ac-4eeb-8fe6-ad411966b3d3","Type":"ContainerDied","Data":"5c1f5c13def0721993c42fbb7e9330a705cffc8e6326a288871d364ef1275f63"} Jan 31 14:45:19 crc kubenswrapper[4751]: I0131 14:45:19.627795 4751 scope.go:117] "RemoveContainer" containerID="cd00a1023a77b74abf41fc259fe9b2a475f7ada25dc1b39aa83f13c57794ccfd" Jan 31 14:45:19 crc kubenswrapper[4751]: I0131 14:45:19.627796 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2lq4t" Jan 31 14:45:19 crc kubenswrapper[4751]: I0131 14:45:19.647933 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-gktqp" Jan 31 14:45:19 crc kubenswrapper[4751]: I0131 14:45:19.648643 4751 scope.go:117] "RemoveContainer" containerID="6ee8fb8b12b6ee8cd20623ca96e7a87fc43d879e6d76a01bc3e55e235825e807" Jan 31 14:45:19 crc kubenswrapper[4751]: I0131 14:45:19.655364 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c447796d-48ac-4eeb-8fe6-ad411966b3d3-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:19 crc kubenswrapper[4751]: I0131 14:45:19.655404 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2g96\" (UniqueName: \"kubernetes.io/projected/c447796d-48ac-4eeb-8fe6-ad411966b3d3-kube-api-access-n2g96\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:19 crc kubenswrapper[4751]: I0131 14:45:19.655423 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c447796d-48ac-4eeb-8fe6-ad411966b3d3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:19 crc kubenswrapper[4751]: I0131 14:45:19.655451 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-2lq4t"] Jan 31 14:45:19 crc kubenswrapper[4751]: I0131 14:45:19.663158 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-2lq4t"] Jan 31 14:45:19 crc kubenswrapper[4751]: I0131 14:45:19.687907 4751 scope.go:117] "RemoveContainer" containerID="f55678880104a29f2f67c32892dfe2939404ec7dce246a6e2dd6c365f96de5ab" Jan 31 14:45:19 crc kubenswrapper[4751]: I0131 14:45:19.694387 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-gktqp" Jan 31 14:45:20 crc 
kubenswrapper[4751]: I0131 14:45:20.416995 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c447796d-48ac-4eeb-8fe6-ad411966b3d3" path="/var/lib/kubelet/pods/c447796d-48ac-4eeb-8fe6-ad411966b3d3/volumes" Jan 31 14:45:20 crc kubenswrapper[4751]: I0131 14:45:20.779759 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nfjx5"] Jan 31 14:45:20 crc kubenswrapper[4751]: I0131 14:45:20.780168 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-nfjx5" podUID="e771b68a-beea-4c8b-a085-b869155ca20d" containerName="registry-server" containerID="cri-o://3b08935560f35380fa53730c9bdd64653cc7a118a7c4b8bd45e5eeddbd415e24" gracePeriod=2 Jan 31 14:45:21 crc kubenswrapper[4751]: I0131 14:45:21.224594 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nfjx5" Jan 31 14:45:21 crc kubenswrapper[4751]: I0131 14:45:21.278658 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e771b68a-beea-4c8b-a085-b869155ca20d-utilities\") pod \"e771b68a-beea-4c8b-a085-b869155ca20d\" (UID: \"e771b68a-beea-4c8b-a085-b869155ca20d\") " Jan 31 14:45:21 crc kubenswrapper[4751]: I0131 14:45:21.278712 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e771b68a-beea-4c8b-a085-b869155ca20d-catalog-content\") pod \"e771b68a-beea-4c8b-a085-b869155ca20d\" (UID: \"e771b68a-beea-4c8b-a085-b869155ca20d\") " Jan 31 14:45:21 crc kubenswrapper[4751]: I0131 14:45:21.278761 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4nf7x\" (UniqueName: \"kubernetes.io/projected/e771b68a-beea-4c8b-a085-b869155ca20d-kube-api-access-4nf7x\") pod \"e771b68a-beea-4c8b-a085-b869155ca20d\" (UID: 
\"e771b68a-beea-4c8b-a085-b869155ca20d\") " Jan 31 14:45:21 crc kubenswrapper[4751]: I0131 14:45:21.279493 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e771b68a-beea-4c8b-a085-b869155ca20d-utilities" (OuterVolumeSpecName: "utilities") pod "e771b68a-beea-4c8b-a085-b869155ca20d" (UID: "e771b68a-beea-4c8b-a085-b869155ca20d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:45:21 crc kubenswrapper[4751]: I0131 14:45:21.282623 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e771b68a-beea-4c8b-a085-b869155ca20d-kube-api-access-4nf7x" (OuterVolumeSpecName: "kube-api-access-4nf7x") pod "e771b68a-beea-4c8b-a085-b869155ca20d" (UID: "e771b68a-beea-4c8b-a085-b869155ca20d"). InnerVolumeSpecName "kube-api-access-4nf7x". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:45:21 crc kubenswrapper[4751]: I0131 14:45:21.313209 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e771b68a-beea-4c8b-a085-b869155ca20d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e771b68a-beea-4c8b-a085-b869155ca20d" (UID: "e771b68a-beea-4c8b-a085-b869155ca20d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:45:21 crc kubenswrapper[4751]: I0131 14:45:21.379763 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4nf7x\" (UniqueName: \"kubernetes.io/projected/e771b68a-beea-4c8b-a085-b869155ca20d-kube-api-access-4nf7x\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:21 crc kubenswrapper[4751]: I0131 14:45:21.379797 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e771b68a-beea-4c8b-a085-b869155ca20d-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:21 crc kubenswrapper[4751]: I0131 14:45:21.379806 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e771b68a-beea-4c8b-a085-b869155ca20d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:21 crc kubenswrapper[4751]: I0131 14:45:21.641817 4751 generic.go:334] "Generic (PLEG): container finished" podID="e771b68a-beea-4c8b-a085-b869155ca20d" containerID="3b08935560f35380fa53730c9bdd64653cc7a118a7c4b8bd45e5eeddbd415e24" exitCode=0 Jan 31 14:45:21 crc kubenswrapper[4751]: I0131 14:45:21.641898 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nfjx5" event={"ID":"e771b68a-beea-4c8b-a085-b869155ca20d","Type":"ContainerDied","Data":"3b08935560f35380fa53730c9bdd64653cc7a118a7c4b8bd45e5eeddbd415e24"} Jan 31 14:45:21 crc kubenswrapper[4751]: I0131 14:45:21.641952 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-nfjx5" Jan 31 14:45:21 crc kubenswrapper[4751]: I0131 14:45:21.641990 4751 scope.go:117] "RemoveContainer" containerID="3b08935560f35380fa53730c9bdd64653cc7a118a7c4b8bd45e5eeddbd415e24" Jan 31 14:45:21 crc kubenswrapper[4751]: I0131 14:45:21.641971 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-nfjx5" event={"ID":"e771b68a-beea-4c8b-a085-b869155ca20d","Type":"ContainerDied","Data":"17e2b2135e55e973ccc015ba33cfd9e0c7a1763d73b3153f649e1c6747bac744"} Jan 31 14:45:21 crc kubenswrapper[4751]: I0131 14:45:21.660365 4751 scope.go:117] "RemoveContainer" containerID="449419de3999925adbffe064ed4c6d253fd8062b2c5e50eef6d0e389bc5f1a74" Jan 31 14:45:21 crc kubenswrapper[4751]: I0131 14:45:21.670328 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-nfjx5"] Jan 31 14:45:21 crc kubenswrapper[4751]: I0131 14:45:21.673218 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-nfjx5"] Jan 31 14:45:21 crc kubenswrapper[4751]: I0131 14:45:21.700844 4751 scope.go:117] "RemoveContainer" containerID="cc1400d076f7032bfa7b9349903c39c2a8d9d2e65e96a7551c8c78a1f7255455" Jan 31 14:45:21 crc kubenswrapper[4751]: I0131 14:45:21.716226 4751 scope.go:117] "RemoveContainer" containerID="3b08935560f35380fa53730c9bdd64653cc7a118a7c4b8bd45e5eeddbd415e24" Jan 31 14:45:21 crc kubenswrapper[4751]: E0131 14:45:21.716642 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3b08935560f35380fa53730c9bdd64653cc7a118a7c4b8bd45e5eeddbd415e24\": container with ID starting with 3b08935560f35380fa53730c9bdd64653cc7a118a7c4b8bd45e5eeddbd415e24 not found: ID does not exist" containerID="3b08935560f35380fa53730c9bdd64653cc7a118a7c4b8bd45e5eeddbd415e24" Jan 31 14:45:21 crc kubenswrapper[4751]: I0131 14:45:21.716747 4751 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3b08935560f35380fa53730c9bdd64653cc7a118a7c4b8bd45e5eeddbd415e24"} err="failed to get container status \"3b08935560f35380fa53730c9bdd64653cc7a118a7c4b8bd45e5eeddbd415e24\": rpc error: code = NotFound desc = could not find container \"3b08935560f35380fa53730c9bdd64653cc7a118a7c4b8bd45e5eeddbd415e24\": container with ID starting with 3b08935560f35380fa53730c9bdd64653cc7a118a7c4b8bd45e5eeddbd415e24 not found: ID does not exist" Jan 31 14:45:21 crc kubenswrapper[4751]: I0131 14:45:21.716849 4751 scope.go:117] "RemoveContainer" containerID="449419de3999925adbffe064ed4c6d253fd8062b2c5e50eef6d0e389bc5f1a74" Jan 31 14:45:21 crc kubenswrapper[4751]: E0131 14:45:21.717444 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"449419de3999925adbffe064ed4c6d253fd8062b2c5e50eef6d0e389bc5f1a74\": container with ID starting with 449419de3999925adbffe064ed4c6d253fd8062b2c5e50eef6d0e389bc5f1a74 not found: ID does not exist" containerID="449419de3999925adbffe064ed4c6d253fd8062b2c5e50eef6d0e389bc5f1a74" Jan 31 14:45:21 crc kubenswrapper[4751]: I0131 14:45:21.717481 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"449419de3999925adbffe064ed4c6d253fd8062b2c5e50eef6d0e389bc5f1a74"} err="failed to get container status \"449419de3999925adbffe064ed4c6d253fd8062b2c5e50eef6d0e389bc5f1a74\": rpc error: code = NotFound desc = could not find container \"449419de3999925adbffe064ed4c6d253fd8062b2c5e50eef6d0e389bc5f1a74\": container with ID starting with 449419de3999925adbffe064ed4c6d253fd8062b2c5e50eef6d0e389bc5f1a74 not found: ID does not exist" Jan 31 14:45:21 crc kubenswrapper[4751]: I0131 14:45:21.717506 4751 scope.go:117] "RemoveContainer" containerID="cc1400d076f7032bfa7b9349903c39c2a8d9d2e65e96a7551c8c78a1f7255455" Jan 31 14:45:21 crc kubenswrapper[4751]: E0131 
14:45:21.717790 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cc1400d076f7032bfa7b9349903c39c2a8d9d2e65e96a7551c8c78a1f7255455\": container with ID starting with cc1400d076f7032bfa7b9349903c39c2a8d9d2e65e96a7551c8c78a1f7255455 not found: ID does not exist" containerID="cc1400d076f7032bfa7b9349903c39c2a8d9d2e65e96a7551c8c78a1f7255455" Jan 31 14:45:21 crc kubenswrapper[4751]: I0131 14:45:21.717891 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cc1400d076f7032bfa7b9349903c39c2a8d9d2e65e96a7551c8c78a1f7255455"} err="failed to get container status \"cc1400d076f7032bfa7b9349903c39c2a8d9d2e65e96a7551c8c78a1f7255455\": rpc error: code = NotFound desc = could not find container \"cc1400d076f7032bfa7b9349903c39c2a8d9d2e65e96a7551c8c78a1f7255455\": container with ID starting with cc1400d076f7032bfa7b9349903c39c2a8d9d2e65e96a7551c8c78a1f7255455 not found: ID does not exist" Jan 31 14:45:22 crc kubenswrapper[4751]: I0131 14:45:22.413559 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e771b68a-beea-4c8b-a085-b869155ca20d" path="/var/lib/kubelet/pods/e771b68a-beea-4c8b-a085-b869155ca20d/volumes" Jan 31 14:45:23 crc kubenswrapper[4751]: I0131 14:45:23.886859 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-9ff5cfb77-zl2hc"] Jan 31 14:45:23 crc kubenswrapper[4751]: I0131 14:45:23.887128 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-9ff5cfb77-zl2hc" podUID="7689427f-2c92-4b56-9617-1139504142ee" containerName="controller-manager" containerID="cri-o://5ab0b26976de57e2a1cac017e1118160bad1efe8234f246fb4610cad0bf8fe55" gracePeriod=30 Jan 31 14:45:23 crc kubenswrapper[4751]: I0131 14:45:23.985995 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-route-controller-manager/route-controller-manager-6fb78c7854-7b78f"] Jan 31 14:45:23 crc kubenswrapper[4751]: I0131 14:45:23.986775 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6fb78c7854-7b78f" podUID="78cdf25f-daec-4cd7-8954-1fef6f3727db" containerName="route-controller-manager" containerID="cri-o://d6fcf4545fc743b58a8d591ee8c96761c9cf12c0b3c6f1393a495a38267ab81b" gracePeriod=30 Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.389568 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fb78c7854-7b78f" Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.482981 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-9ff5cfb77-zl2hc" Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.523592 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78cdf25f-daec-4cd7-8954-1fef6f3727db-client-ca\") pod \"78cdf25f-daec-4cd7-8954-1fef6f3727db\" (UID: \"78cdf25f-daec-4cd7-8954-1fef6f3727db\") " Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.523646 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78cdf25f-daec-4cd7-8954-1fef6f3727db-serving-cert\") pod \"78cdf25f-daec-4cd7-8954-1fef6f3727db\" (UID: \"78cdf25f-daec-4cd7-8954-1fef6f3727db\") " Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.523697 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6tv6b\" (UniqueName: \"kubernetes.io/projected/78cdf25f-daec-4cd7-8954-1fef6f3727db-kube-api-access-6tv6b\") pod \"78cdf25f-daec-4cd7-8954-1fef6f3727db\" (UID: \"78cdf25f-daec-4cd7-8954-1fef6f3727db\") " 
Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.523735 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78cdf25f-daec-4cd7-8954-1fef6f3727db-config\") pod \"78cdf25f-daec-4cd7-8954-1fef6f3727db\" (UID: \"78cdf25f-daec-4cd7-8954-1fef6f3727db\") " Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.524659 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78cdf25f-daec-4cd7-8954-1fef6f3727db-client-ca" (OuterVolumeSpecName: "client-ca") pod "78cdf25f-daec-4cd7-8954-1fef6f3727db" (UID: "78cdf25f-daec-4cd7-8954-1fef6f3727db"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.524692 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/78cdf25f-daec-4cd7-8954-1fef6f3727db-config" (OuterVolumeSpecName: "config") pod "78cdf25f-daec-4cd7-8954-1fef6f3727db" (UID: "78cdf25f-daec-4cd7-8954-1fef6f3727db"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.529060 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/78cdf25f-daec-4cd7-8954-1fef6f3727db-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "78cdf25f-daec-4cd7-8954-1fef6f3727db" (UID: "78cdf25f-daec-4cd7-8954-1fef6f3727db"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.529308 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/78cdf25f-daec-4cd7-8954-1fef6f3727db-kube-api-access-6tv6b" (OuterVolumeSpecName: "kube-api-access-6tv6b") pod "78cdf25f-daec-4cd7-8954-1fef6f3727db" (UID: "78cdf25f-daec-4cd7-8954-1fef6f3727db"). 
InnerVolumeSpecName "kube-api-access-6tv6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.624760 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7689427f-2c92-4b56-9617-1139504142ee-client-ca\") pod \"7689427f-2c92-4b56-9617-1139504142ee\" (UID: \"7689427f-2c92-4b56-9617-1139504142ee\") " Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.624822 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7689427f-2c92-4b56-9617-1139504142ee-config\") pod \"7689427f-2c92-4b56-9617-1139504142ee\" (UID: \"7689427f-2c92-4b56-9617-1139504142ee\") " Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.624844 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7689427f-2c92-4b56-9617-1139504142ee-serving-cert\") pod \"7689427f-2c92-4b56-9617-1139504142ee\" (UID: \"7689427f-2c92-4b56-9617-1139504142ee\") " Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.624913 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7689427f-2c92-4b56-9617-1139504142ee-proxy-ca-bundles\") pod \"7689427f-2c92-4b56-9617-1139504142ee\" (UID: \"7689427f-2c92-4b56-9617-1139504142ee\") " Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.624933 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-krs4f\" (UniqueName: \"kubernetes.io/projected/7689427f-2c92-4b56-9617-1139504142ee-kube-api-access-krs4f\") pod \"7689427f-2c92-4b56-9617-1139504142ee\" (UID: \"7689427f-2c92-4b56-9617-1139504142ee\") " Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.625141 4751 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-6tv6b\" (UniqueName: \"kubernetes.io/projected/78cdf25f-daec-4cd7-8954-1fef6f3727db-kube-api-access-6tv6b\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.625152 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/78cdf25f-daec-4cd7-8954-1fef6f3727db-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.625163 4751 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/78cdf25f-daec-4cd7-8954-1fef6f3727db-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.625172 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/78cdf25f-daec-4cd7-8954-1fef6f3727db-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.626057 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7689427f-2c92-4b56-9617-1139504142ee-client-ca" (OuterVolumeSpecName: "client-ca") pod "7689427f-2c92-4b56-9617-1139504142ee" (UID: "7689427f-2c92-4b56-9617-1139504142ee"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.626083 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7689427f-2c92-4b56-9617-1139504142ee-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7689427f-2c92-4b56-9617-1139504142ee" (UID: "7689427f-2c92-4b56-9617-1139504142ee"). InnerVolumeSpecName "proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.626263 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7689427f-2c92-4b56-9617-1139504142ee-config" (OuterVolumeSpecName: "config") pod "7689427f-2c92-4b56-9617-1139504142ee" (UID: "7689427f-2c92-4b56-9617-1139504142ee"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.630216 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7689427f-2c92-4b56-9617-1139504142ee-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7689427f-2c92-4b56-9617-1139504142ee" (UID: "7689427f-2c92-4b56-9617-1139504142ee"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.631271 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7689427f-2c92-4b56-9617-1139504142ee-kube-api-access-krs4f" (OuterVolumeSpecName: "kube-api-access-krs4f") pod "7689427f-2c92-4b56-9617-1139504142ee" (UID: "7689427f-2c92-4b56-9617-1139504142ee"). InnerVolumeSpecName "kube-api-access-krs4f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.656534 4751 generic.go:334] "Generic (PLEG): container finished" podID="7689427f-2c92-4b56-9617-1139504142ee" containerID="5ab0b26976de57e2a1cac017e1118160bad1efe8234f246fb4610cad0bf8fe55" exitCode=0 Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.656626 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-9ff5cfb77-zl2hc" Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.656603 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9ff5cfb77-zl2hc" event={"ID":"7689427f-2c92-4b56-9617-1139504142ee","Type":"ContainerDied","Data":"5ab0b26976de57e2a1cac017e1118160bad1efe8234f246fb4610cad0bf8fe55"} Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.656753 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-9ff5cfb77-zl2hc" event={"ID":"7689427f-2c92-4b56-9617-1139504142ee","Type":"ContainerDied","Data":"53e8f013a379679cef6168fde6d18706b1593479aa88c48f6b833cf1be744d64"} Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.656778 4751 scope.go:117] "RemoveContainer" containerID="5ab0b26976de57e2a1cac017e1118160bad1efe8234f246fb4610cad0bf8fe55" Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.657790 4751 generic.go:334] "Generic (PLEG): container finished" podID="78cdf25f-daec-4cd7-8954-1fef6f3727db" containerID="d6fcf4545fc743b58a8d591ee8c96761c9cf12c0b3c6f1393a495a38267ab81b" exitCode=0 Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.657813 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fb78c7854-7b78f" event={"ID":"78cdf25f-daec-4cd7-8954-1fef6f3727db","Type":"ContainerDied","Data":"d6fcf4545fc743b58a8d591ee8c96761c9cf12c0b3c6f1393a495a38267ab81b"} Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.657826 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6fb78c7854-7b78f" event={"ID":"78cdf25f-daec-4cd7-8954-1fef6f3727db","Type":"ContainerDied","Data":"970e5859fee2ce045474246c69d85b582bd206939166d1d5d70f07913b640d31"} Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.657868 4751 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6fb78c7854-7b78f" Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.675222 4751 scope.go:117] "RemoveContainer" containerID="5ab0b26976de57e2a1cac017e1118160bad1efe8234f246fb4610cad0bf8fe55" Jan 31 14:45:24 crc kubenswrapper[4751]: E0131 14:45:24.675524 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5ab0b26976de57e2a1cac017e1118160bad1efe8234f246fb4610cad0bf8fe55\": container with ID starting with 5ab0b26976de57e2a1cac017e1118160bad1efe8234f246fb4610cad0bf8fe55 not found: ID does not exist" containerID="5ab0b26976de57e2a1cac017e1118160bad1efe8234f246fb4610cad0bf8fe55" Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.675552 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5ab0b26976de57e2a1cac017e1118160bad1efe8234f246fb4610cad0bf8fe55"} err="failed to get container status \"5ab0b26976de57e2a1cac017e1118160bad1efe8234f246fb4610cad0bf8fe55\": rpc error: code = NotFound desc = could not find container \"5ab0b26976de57e2a1cac017e1118160bad1efe8234f246fb4610cad0bf8fe55\": container with ID starting with 5ab0b26976de57e2a1cac017e1118160bad1efe8234f246fb4610cad0bf8fe55 not found: ID does not exist" Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.675573 4751 scope.go:117] "RemoveContainer" containerID="d6fcf4545fc743b58a8d591ee8c96761c9cf12c0b3c6f1393a495a38267ab81b" Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.684088 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb78c7854-7b78f"] Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.687522 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6fb78c7854-7b78f"] Jan 31 14:45:24 crc kubenswrapper[4751]: 
I0131 14:45:24.688492 4751 scope.go:117] "RemoveContainer" containerID="d6fcf4545fc743b58a8d591ee8c96761c9cf12c0b3c6f1393a495a38267ab81b" Jan 31 14:45:24 crc kubenswrapper[4751]: E0131 14:45:24.688966 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d6fcf4545fc743b58a8d591ee8c96761c9cf12c0b3c6f1393a495a38267ab81b\": container with ID starting with d6fcf4545fc743b58a8d591ee8c96761c9cf12c0b3c6f1393a495a38267ab81b not found: ID does not exist" containerID="d6fcf4545fc743b58a8d591ee8c96761c9cf12c0b3c6f1393a495a38267ab81b" Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.689001 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d6fcf4545fc743b58a8d591ee8c96761c9cf12c0b3c6f1393a495a38267ab81b"} err="failed to get container status \"d6fcf4545fc743b58a8d591ee8c96761c9cf12c0b3c6f1393a495a38267ab81b\": rpc error: code = NotFound desc = could not find container \"d6fcf4545fc743b58a8d591ee8c96761c9cf12c0b3c6f1393a495a38267ab81b\": container with ID starting with d6fcf4545fc743b58a8d591ee8c96761c9cf12c0b3c6f1393a495a38267ab81b not found: ID does not exist" Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.700346 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-9ff5cfb77-zl2hc"] Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.703212 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-9ff5cfb77-zl2hc"] Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.726715 4751 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7689427f-2c92-4b56-9617-1139504142ee-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.726755 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-krs4f\" 
(UniqueName: \"kubernetes.io/projected/7689427f-2c92-4b56-9617-1139504142ee-kube-api-access-krs4f\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.726768 4751 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7689427f-2c92-4b56-9617-1139504142ee-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.726777 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7689427f-2c92-4b56-9617-1139504142ee-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:24 crc kubenswrapper[4751]: I0131 14:45:24.726786 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7689427f-2c92-4b56-9617-1139504142ee-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.136398 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77b9dd5444-kpj8z"] Jan 31 14:45:25 crc kubenswrapper[4751]: E0131 14:45:25.136636 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e771b68a-beea-4c8b-a085-b869155ca20d" containerName="registry-server" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.136652 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e771b68a-beea-4c8b-a085-b869155ca20d" containerName="registry-server" Jan 31 14:45:25 crc kubenswrapper[4751]: E0131 14:45:25.136663 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="78cdf25f-daec-4cd7-8954-1fef6f3727db" containerName="route-controller-manager" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.136672 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="78cdf25f-daec-4cd7-8954-1fef6f3727db" containerName="route-controller-manager" Jan 31 14:45:25 crc kubenswrapper[4751]: E0131 14:45:25.136683 4751 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5c0f5c8-cecf-451f-abef-bf357716eb71" containerName="extract-content" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.136691 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5c0f5c8-cecf-451f-abef-bf357716eb71" containerName="extract-content" Jan 31 14:45:25 crc kubenswrapper[4751]: E0131 14:45:25.136704 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5c0f5c8-cecf-451f-abef-bf357716eb71" containerName="extract-utilities" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.136711 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5c0f5c8-cecf-451f-abef-bf357716eb71" containerName="extract-utilities" Jan 31 14:45:25 crc kubenswrapper[4751]: E0131 14:45:25.136724 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e771b68a-beea-4c8b-a085-b869155ca20d" containerName="extract-content" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.136732 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e771b68a-beea-4c8b-a085-b869155ca20d" containerName="extract-content" Jan 31 14:45:25 crc kubenswrapper[4751]: E0131 14:45:25.136756 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4ee27ad5-3acb-4388-a964-3b526b79e776" containerName="pruner" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.136764 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ee27ad5-3acb-4388-a964-3b526b79e776" containerName="pruner" Jan 31 14:45:25 crc kubenswrapper[4751]: E0131 14:45:25.136775 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c447796d-48ac-4eeb-8fe6-ad411966b3d3" containerName="registry-server" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.136782 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="c447796d-48ac-4eeb-8fe6-ad411966b3d3" containerName="registry-server" Jan 31 14:45:25 crc kubenswrapper[4751]: E0131 14:45:25.136791 4751 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="f614f9ab-b5e2-4548-93e7-571d1ffb57b0" containerName="registry-server" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.136799 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f614f9ab-b5e2-4548-93e7-571d1ffb57b0" containerName="registry-server" Jan 31 14:45:25 crc kubenswrapper[4751]: E0131 14:45:25.136807 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a7534cc-afa8-4cd1-acb0-e4269e55316b" containerName="collect-profiles" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.136815 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a7534cc-afa8-4cd1-acb0-e4269e55316b" containerName="collect-profiles" Jan 31 14:45:25 crc kubenswrapper[4751]: E0131 14:45:25.136823 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f614f9ab-b5e2-4548-93e7-571d1ffb57b0" containerName="extract-content" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.136830 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f614f9ab-b5e2-4548-93e7-571d1ffb57b0" containerName="extract-content" Jan 31 14:45:25 crc kubenswrapper[4751]: E0131 14:45:25.136844 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5c0f5c8-cecf-451f-abef-bf357716eb71" containerName="registry-server" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.136852 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5c0f5c8-cecf-451f-abef-bf357716eb71" containerName="registry-server" Jan 31 14:45:25 crc kubenswrapper[4751]: E0131 14:45:25.136866 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f614f9ab-b5e2-4548-93e7-571d1ffb57b0" containerName="extract-utilities" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.136874 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f614f9ab-b5e2-4548-93e7-571d1ffb57b0" containerName="extract-utilities" Jan 31 14:45:25 crc kubenswrapper[4751]: E0131 14:45:25.136882 4751 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="e771b68a-beea-4c8b-a085-b869155ca20d" containerName="extract-utilities" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.136889 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e771b68a-beea-4c8b-a085-b869155ca20d" containerName="extract-utilities" Jan 31 14:45:25 crc kubenswrapper[4751]: E0131 14:45:25.136901 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7689427f-2c92-4b56-9617-1139504142ee" containerName="controller-manager" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.136908 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="7689427f-2c92-4b56-9617-1139504142ee" containerName="controller-manager" Jan 31 14:45:25 crc kubenswrapper[4751]: E0131 14:45:25.136936 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c447796d-48ac-4eeb-8fe6-ad411966b3d3" containerName="extract-utilities" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.136945 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="c447796d-48ac-4eeb-8fe6-ad411966b3d3" containerName="extract-utilities" Jan 31 14:45:25 crc kubenswrapper[4751]: E0131 14:45:25.136953 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c447796d-48ac-4eeb-8fe6-ad411966b3d3" containerName="extract-content" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.136960 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="c447796d-48ac-4eeb-8fe6-ad411966b3d3" containerName="extract-content" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.137093 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="d5c0f5c8-cecf-451f-abef-bf357716eb71" containerName="registry-server" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.137111 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f614f9ab-b5e2-4548-93e7-571d1ffb57b0" containerName="registry-server" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.137121 4751 
memory_manager.go:354] "RemoveStaleState removing state" podUID="7a7534cc-afa8-4cd1-acb0-e4269e55316b" containerName="collect-profiles" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.137133 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e771b68a-beea-4c8b-a085-b869155ca20d" containerName="registry-server" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.137145 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="c447796d-48ac-4eeb-8fe6-ad411966b3d3" containerName="registry-server" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.137156 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="4ee27ad5-3acb-4388-a964-3b526b79e776" containerName="pruner" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.137165 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="78cdf25f-daec-4cd7-8954-1fef6f3727db" containerName="route-controller-manager" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.137176 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="7689427f-2c92-4b56-9617-1139504142ee" containerName="controller-manager" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.137592 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77b9dd5444-kpj8z" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.140329 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7df5c4f8d-6z7qk"] Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.140377 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.140444 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.140538 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.140984 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7df5c4f8d-6z7qk" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.141003 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.141153 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.143874 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.144219 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.144596 4751 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.145181 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.145359 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.146170 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.149213 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.151723 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.156162 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7df5c4f8d-6z7qk"] Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.163910 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77b9dd5444-kpj8z"] Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.234703 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10e50c97-9956-48fc-a759-6d6a2e2d8ca5-config\") pod \"controller-manager-7df5c4f8d-6z7qk\" (UID: \"10e50c97-9956-48fc-a759-6d6a2e2d8ca5\") " pod="openshift-controller-manager/controller-manager-7df5c4f8d-6z7qk" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.235157 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" 
(UniqueName: \"kubernetes.io/configmap/10e50c97-9956-48fc-a759-6d6a2e2d8ca5-client-ca\") pod \"controller-manager-7df5c4f8d-6z7qk\" (UID: \"10e50c97-9956-48fc-a759-6d6a2e2d8ca5\") " pod="openshift-controller-manager/controller-manager-7df5c4f8d-6z7qk" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.235366 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e51033f6-0061-4b08-9d82-11c610c7d396-config\") pod \"route-controller-manager-77b9dd5444-kpj8z\" (UID: \"e51033f6-0061-4b08-9d82-11c610c7d396\") " pod="openshift-route-controller-manager/route-controller-manager-77b9dd5444-kpj8z" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.235559 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/10e50c97-9956-48fc-a759-6d6a2e2d8ca5-proxy-ca-bundles\") pod \"controller-manager-7df5c4f8d-6z7qk\" (UID: \"10e50c97-9956-48fc-a759-6d6a2e2d8ca5\") " pod="openshift-controller-manager/controller-manager-7df5c4f8d-6z7qk" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.235737 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e51033f6-0061-4b08-9d82-11c610c7d396-client-ca\") pod \"route-controller-manager-77b9dd5444-kpj8z\" (UID: \"e51033f6-0061-4b08-9d82-11c610c7d396\") " pod="openshift-route-controller-manager/route-controller-manager-77b9dd5444-kpj8z" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.235914 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t85s\" (UniqueName: \"kubernetes.io/projected/10e50c97-9956-48fc-a759-6d6a2e2d8ca5-kube-api-access-9t85s\") pod \"controller-manager-7df5c4f8d-6z7qk\" (UID: \"10e50c97-9956-48fc-a759-6d6a2e2d8ca5\") " 
pod="openshift-controller-manager/controller-manager-7df5c4f8d-6z7qk" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.236136 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e51033f6-0061-4b08-9d82-11c610c7d396-serving-cert\") pod \"route-controller-manager-77b9dd5444-kpj8z\" (UID: \"e51033f6-0061-4b08-9d82-11c610c7d396\") " pod="openshift-route-controller-manager/route-controller-manager-77b9dd5444-kpj8z" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.236300 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10e50c97-9956-48fc-a759-6d6a2e2d8ca5-serving-cert\") pod \"controller-manager-7df5c4f8d-6z7qk\" (UID: \"10e50c97-9956-48fc-a759-6d6a2e2d8ca5\") " pod="openshift-controller-manager/controller-manager-7df5c4f8d-6z7qk" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.236472 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mqnt2\" (UniqueName: \"kubernetes.io/projected/e51033f6-0061-4b08-9d82-11c610c7d396-kube-api-access-mqnt2\") pod \"route-controller-manager-77b9dd5444-kpj8z\" (UID: \"e51033f6-0061-4b08-9d82-11c610c7d396\") " pod="openshift-route-controller-manager/route-controller-manager-77b9dd5444-kpj8z" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.337430 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e51033f6-0061-4b08-9d82-11c610c7d396-serving-cert\") pod \"route-controller-manager-77b9dd5444-kpj8z\" (UID: \"e51033f6-0061-4b08-9d82-11c610c7d396\") " pod="openshift-route-controller-manager/route-controller-manager-77b9dd5444-kpj8z" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.337473 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10e50c97-9956-48fc-a759-6d6a2e2d8ca5-serving-cert\") pod \"controller-manager-7df5c4f8d-6z7qk\" (UID: \"10e50c97-9956-48fc-a759-6d6a2e2d8ca5\") " pod="openshift-controller-manager/controller-manager-7df5c4f8d-6z7qk" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.337494 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mqnt2\" (UniqueName: \"kubernetes.io/projected/e51033f6-0061-4b08-9d82-11c610c7d396-kube-api-access-mqnt2\") pod \"route-controller-manager-77b9dd5444-kpj8z\" (UID: \"e51033f6-0061-4b08-9d82-11c610c7d396\") " pod="openshift-route-controller-manager/route-controller-manager-77b9dd5444-kpj8z" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.337528 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10e50c97-9956-48fc-a759-6d6a2e2d8ca5-client-ca\") pod \"controller-manager-7df5c4f8d-6z7qk\" (UID: \"10e50c97-9956-48fc-a759-6d6a2e2d8ca5\") " pod="openshift-controller-manager/controller-manager-7df5c4f8d-6z7qk" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.337546 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10e50c97-9956-48fc-a759-6d6a2e2d8ca5-config\") pod \"controller-manager-7df5c4f8d-6z7qk\" (UID: \"10e50c97-9956-48fc-a759-6d6a2e2d8ca5\") " pod="openshift-controller-manager/controller-manager-7df5c4f8d-6z7qk" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.337565 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e51033f6-0061-4b08-9d82-11c610c7d396-config\") pod \"route-controller-manager-77b9dd5444-kpj8z\" (UID: \"e51033f6-0061-4b08-9d82-11c610c7d396\") " pod="openshift-route-controller-manager/route-controller-manager-77b9dd5444-kpj8z" Jan 31 14:45:25 crc 
kubenswrapper[4751]: I0131 14:45:25.337594 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/10e50c97-9956-48fc-a759-6d6a2e2d8ca5-proxy-ca-bundles\") pod \"controller-manager-7df5c4f8d-6z7qk\" (UID: \"10e50c97-9956-48fc-a759-6d6a2e2d8ca5\") " pod="openshift-controller-manager/controller-manager-7df5c4f8d-6z7qk" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.337618 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e51033f6-0061-4b08-9d82-11c610c7d396-client-ca\") pod \"route-controller-manager-77b9dd5444-kpj8z\" (UID: \"e51033f6-0061-4b08-9d82-11c610c7d396\") " pod="openshift-route-controller-manager/route-controller-manager-77b9dd5444-kpj8z" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.337635 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t85s\" (UniqueName: \"kubernetes.io/projected/10e50c97-9956-48fc-a759-6d6a2e2d8ca5-kube-api-access-9t85s\") pod \"controller-manager-7df5c4f8d-6z7qk\" (UID: \"10e50c97-9956-48fc-a759-6d6a2e2d8ca5\") " pod="openshift-controller-manager/controller-manager-7df5c4f8d-6z7qk" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.338996 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/10e50c97-9956-48fc-a759-6d6a2e2d8ca5-proxy-ca-bundles\") pod \"controller-manager-7df5c4f8d-6z7qk\" (UID: \"10e50c97-9956-48fc-a759-6d6a2e2d8ca5\") " pod="openshift-controller-manager/controller-manager-7df5c4f8d-6z7qk" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.339118 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10e50c97-9956-48fc-a759-6d6a2e2d8ca5-client-ca\") pod \"controller-manager-7df5c4f8d-6z7qk\" (UID: 
\"10e50c97-9956-48fc-a759-6d6a2e2d8ca5\") " pod="openshift-controller-manager/controller-manager-7df5c4f8d-6z7qk" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.339405 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e51033f6-0061-4b08-9d82-11c610c7d396-config\") pod \"route-controller-manager-77b9dd5444-kpj8z\" (UID: \"e51033f6-0061-4b08-9d82-11c610c7d396\") " pod="openshift-route-controller-manager/route-controller-manager-77b9dd5444-kpj8z" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.339658 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10e50c97-9956-48fc-a759-6d6a2e2d8ca5-config\") pod \"controller-manager-7df5c4f8d-6z7qk\" (UID: \"10e50c97-9956-48fc-a759-6d6a2e2d8ca5\") " pod="openshift-controller-manager/controller-manager-7df5c4f8d-6z7qk" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.339946 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e51033f6-0061-4b08-9d82-11c610c7d396-client-ca\") pod \"route-controller-manager-77b9dd5444-kpj8z\" (UID: \"e51033f6-0061-4b08-9d82-11c610c7d396\") " pod="openshift-route-controller-manager/route-controller-manager-77b9dd5444-kpj8z" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.342471 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e51033f6-0061-4b08-9d82-11c610c7d396-serving-cert\") pod \"route-controller-manager-77b9dd5444-kpj8z\" (UID: \"e51033f6-0061-4b08-9d82-11c610c7d396\") " pod="openshift-route-controller-manager/route-controller-manager-77b9dd5444-kpj8z" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.351212 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/10e50c97-9956-48fc-a759-6d6a2e2d8ca5-serving-cert\") pod \"controller-manager-7df5c4f8d-6z7qk\" (UID: \"10e50c97-9956-48fc-a759-6d6a2e2d8ca5\") " pod="openshift-controller-manager/controller-manager-7df5c4f8d-6z7qk" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.353557 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mqnt2\" (UniqueName: \"kubernetes.io/projected/e51033f6-0061-4b08-9d82-11c610c7d396-kube-api-access-mqnt2\") pod \"route-controller-manager-77b9dd5444-kpj8z\" (UID: \"e51033f6-0061-4b08-9d82-11c610c7d396\") " pod="openshift-route-controller-manager/route-controller-manager-77b9dd5444-kpj8z" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.364746 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t85s\" (UniqueName: \"kubernetes.io/projected/10e50c97-9956-48fc-a759-6d6a2e2d8ca5-kube-api-access-9t85s\") pod \"controller-manager-7df5c4f8d-6z7qk\" (UID: \"10e50c97-9956-48fc-a759-6d6a2e2d8ca5\") " pod="openshift-controller-manager/controller-manager-7df5c4f8d-6z7qk" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.457673 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77b9dd5444-kpj8z" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.470664 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7df5c4f8d-6z7qk" Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.773100 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77b9dd5444-kpj8z"] Jan 31 14:45:25 crc kubenswrapper[4751]: W0131 14:45:25.773577 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode51033f6_0061_4b08_9d82_11c610c7d396.slice/crio-701b25b8e3fbaec6025474cc0863bcdd565567a075d8cac932f6692b1bdc32fa WatchSource:0}: Error finding container 701b25b8e3fbaec6025474cc0863bcdd565567a075d8cac932f6692b1bdc32fa: Status 404 returned error can't find the container with id 701b25b8e3fbaec6025474cc0863bcdd565567a075d8cac932f6692b1bdc32fa Jan 31 14:45:25 crc kubenswrapper[4751]: I0131 14:45:25.903976 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7df5c4f8d-6z7qk"] Jan 31 14:45:25 crc kubenswrapper[4751]: W0131 14:45:25.915422 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10e50c97_9956_48fc_a759_6d6a2e2d8ca5.slice/crio-f5eab53aa57319123515f3ce0a1dc6f4bc60f14152bae2520bba9ec245f1d592 WatchSource:0}: Error finding container f5eab53aa57319123515f3ce0a1dc6f4bc60f14152bae2520bba9ec245f1d592: Status 404 returned error can't find the container with id f5eab53aa57319123515f3ce0a1dc6f4bc60f14152bae2520bba9ec245f1d592 Jan 31 14:45:26 crc kubenswrapper[4751]: I0131 14:45:26.411508 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7689427f-2c92-4b56-9617-1139504142ee" path="/var/lib/kubelet/pods/7689427f-2c92-4b56-9617-1139504142ee/volumes" Jan 31 14:45:26 crc kubenswrapper[4751]: I0131 14:45:26.412544 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="78cdf25f-daec-4cd7-8954-1fef6f3727db" 
path="/var/lib/kubelet/pods/78cdf25f-daec-4cd7-8954-1fef6f3727db/volumes" Jan 31 14:45:26 crc kubenswrapper[4751]: I0131 14:45:26.672297 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77b9dd5444-kpj8z" event={"ID":"e51033f6-0061-4b08-9d82-11c610c7d396","Type":"ContainerStarted","Data":"d3805c33079a37613edf0ea51929b4cc19479078ccdac739485e2af4ae10c78a"} Jan 31 14:45:26 crc kubenswrapper[4751]: I0131 14:45:26.672554 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-77b9dd5444-kpj8z" Jan 31 14:45:26 crc kubenswrapper[4751]: I0131 14:45:26.672626 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77b9dd5444-kpj8z" event={"ID":"e51033f6-0061-4b08-9d82-11c610c7d396","Type":"ContainerStarted","Data":"701b25b8e3fbaec6025474cc0863bcdd565567a075d8cac932f6692b1bdc32fa"} Jan 31 14:45:26 crc kubenswrapper[4751]: I0131 14:45:26.674684 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7df5c4f8d-6z7qk" event={"ID":"10e50c97-9956-48fc-a759-6d6a2e2d8ca5","Type":"ContainerStarted","Data":"ddc997e5bd0f4e42afe9a829495321c7b00002150e75b55e2c4d433cd4092402"} Jan 31 14:45:26 crc kubenswrapper[4751]: I0131 14:45:26.674724 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7df5c4f8d-6z7qk" event={"ID":"10e50c97-9956-48fc-a759-6d6a2e2d8ca5","Type":"ContainerStarted","Data":"f5eab53aa57319123515f3ce0a1dc6f4bc60f14152bae2520bba9ec245f1d592"} Jan 31 14:45:26 crc kubenswrapper[4751]: I0131 14:45:26.675037 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7df5c4f8d-6z7qk" Jan 31 14:45:26 crc kubenswrapper[4751]: I0131 14:45:26.676619 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="openshift-route-controller-manager/route-controller-manager-77b9dd5444-kpj8z" Jan 31 14:45:26 crc kubenswrapper[4751]: I0131 14:45:26.678685 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7df5c4f8d-6z7qk" Jan 31 14:45:26 crc kubenswrapper[4751]: I0131 14:45:26.688210 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-77b9dd5444-kpj8z" podStartSLOduration=2.688193596 podStartE2EDuration="2.688193596s" podCreationTimestamp="2026-01-31 14:45:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:45:26.687272842 +0000 UTC m=+231.061985727" watchObservedRunningTime="2026-01-31 14:45:26.688193596 +0000 UTC m=+231.062906491" Jan 31 14:45:26 crc kubenswrapper[4751]: I0131 14:45:26.728090 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7df5c4f8d-6z7qk" podStartSLOduration=3.728050414 podStartE2EDuration="3.728050414s" podCreationTimestamp="2026-01-31 14:45:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:45:26.726472592 +0000 UTC m=+231.101185487" watchObservedRunningTime="2026-01-31 14:45:26.728050414 +0000 UTC m=+231.102763309" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.259378 4751 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.260238 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" 
containerID="cri-o://39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff" gracePeriod=15 Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.260299 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3" gracePeriod=15 Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.260296 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19" gracePeriod=15 Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.260294 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea" gracePeriod=15 Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.260434 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218" gracePeriod=15 Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.261528 4751 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 14:45:32 crc kubenswrapper[4751]: E0131 14:45:32.261838 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-insecure-readyz" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.261869 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 14:45:32 crc kubenswrapper[4751]: E0131 14:45:32.261897 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.261909 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 31 14:45:32 crc kubenswrapper[4751]: E0131 14:45:32.261927 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.261940 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 14:45:32 crc kubenswrapper[4751]: E0131 14:45:32.261958 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.261971 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 14:45:32 crc kubenswrapper[4751]: E0131 14:45:32.261987 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.261999 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 14:45:32 crc kubenswrapper[4751]: E0131 14:45:32.262013 4751 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.262025 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 14:45:32 crc kubenswrapper[4751]: E0131 14:45:32.262040 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.262051 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.262247 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.262272 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.262288 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.262308 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.262322 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.262341 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 31 
14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.264691 4751 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.265886 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.271728 4751 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="f4b27818a5e8e43d0dc095d08835c792" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.336184 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.336313 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.336362 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.336425 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.336454 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.336491 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.336532 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.336588 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:45:32 crc kubenswrapper[4751]: E0131 14:45:32.338827 
4751 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.98:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.437884 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.437983 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.437985 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.438044 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.438058 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.438104 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.438359 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.438570 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.438652 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.438663 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.438490 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.438708 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.439363 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.439435 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.439551 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod 
\"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.439579 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.640292 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:45:32 crc kubenswrapper[4751]: W0131 14:45:32.690008 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-ded9b48a582c8efc38308d8ddc31d8083958671210498b68ef4025e597af42e4 WatchSource:0}: Error finding container ded9b48a582c8efc38308d8ddc31d8083958671210498b68ef4025e597af42e4: Status 404 returned error can't find the container with id ded9b48a582c8efc38308d8ddc31d8083958671210498b68ef4025e597af42e4 Jan 31 14:45:32 crc kubenswrapper[4751]: E0131 14:45:32.693331 4751 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.98:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188fd80de0800fbf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 14:45:32.692557759 +0000 UTC m=+237.067270684,LastTimestamp:2026-01-31 14:45:32.692557759 +0000 UTC m=+237.067270684,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.720707 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.722714 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.724170 4751 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19" exitCode=0 Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.724213 4751 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3" exitCode=0 Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.724228 4751 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea" exitCode=0 Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.724246 4751 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218" exitCode=2 Jan 31 
14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.724328 4751 scope.go:117] "RemoveContainer" containerID="16f93de7a002a1e6a281d90042a003acf677ad136b29da511ead9acd31eb7ab2" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.726297 4751 generic.go:334] "Generic (PLEG): container finished" podID="ccfa0c88-7f51-4d85-8a49-e05865c6a06e" containerID="8b4d59b21d9818b51f757f56dda578d9b5e64551b0acae90d2098c728b3290ee" exitCode=0 Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.726385 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ccfa0c88-7f51-4d85-8a49-e05865c6a06e","Type":"ContainerDied","Data":"8b4d59b21d9818b51f757f56dda578d9b5e64551b0acae90d2098c728b3290ee"} Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.727373 4751 status_manager.go:851] "Failed to get status for pod" podUID="ccfa0c88-7f51-4d85-8a49-e05865c6a06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 31 14:45:32 crc kubenswrapper[4751]: I0131 14:45:32.727793 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"ded9b48a582c8efc38308d8ddc31d8083958671210498b68ef4025e597af42e4"} Jan 31 14:45:33 crc kubenswrapper[4751]: I0131 14:45:33.737028 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"be1bca22b91e771b11166bb91585a254e34658c1ab13b852f1301a3b8029237f"} Jan 31 14:45:33 crc kubenswrapper[4751]: I0131 14:45:33.738052 4751 status_manager.go:851] "Failed to get status for pod" podUID="ccfa0c88-7f51-4d85-8a49-e05865c6a06e" 
pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 31 14:45:33 crc kubenswrapper[4751]: E0131 14:45:33.738137 4751 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.98:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:45:33 crc kubenswrapper[4751]: I0131 14:45:33.742311 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.191834 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.193238 4751 status_manager.go:851] "Failed to get status for pod" podUID="ccfa0c88-7f51-4d85-8a49-e05865c6a06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.268677 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ccfa0c88-7f51-4d85-8a49-e05865c6a06e-kubelet-dir\") pod \"ccfa0c88-7f51-4d85-8a49-e05865c6a06e\" (UID: \"ccfa0c88-7f51-4d85-8a49-e05865c6a06e\") " Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.268862 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccfa0c88-7f51-4d85-8a49-e05865c6a06e-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod 
"ccfa0c88-7f51-4d85-8a49-e05865c6a06e" (UID: "ccfa0c88-7f51-4d85-8a49-e05865c6a06e"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.268882 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ccfa0c88-7f51-4d85-8a49-e05865c6a06e-var-lock\") pod \"ccfa0c88-7f51-4d85-8a49-e05865c6a06e\" (UID: \"ccfa0c88-7f51-4d85-8a49-e05865c6a06e\") " Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.268946 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ccfa0c88-7f51-4d85-8a49-e05865c6a06e-var-lock" (OuterVolumeSpecName: "var-lock") pod "ccfa0c88-7f51-4d85-8a49-e05865c6a06e" (UID: "ccfa0c88-7f51-4d85-8a49-e05865c6a06e"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.269018 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ccfa0c88-7f51-4d85-8a49-e05865c6a06e-kube-api-access\") pod \"ccfa0c88-7f51-4d85-8a49-e05865c6a06e\" (UID: \"ccfa0c88-7f51-4d85-8a49-e05865c6a06e\") " Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.269582 4751 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/ccfa0c88-7f51-4d85-8a49-e05865c6a06e-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.269615 4751 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/ccfa0c88-7f51-4d85-8a49-e05865c6a06e-var-lock\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.275973 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/projected/ccfa0c88-7f51-4d85-8a49-e05865c6a06e-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "ccfa0c88-7f51-4d85-8a49-e05865c6a06e" (UID: "ccfa0c88-7f51-4d85-8a49-e05865c6a06e"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.392030 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/ccfa0c88-7f51-4d85-8a49-e05865c6a06e-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.656916 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.658518 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.659444 4751 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.660203 4751 status_manager.go:851] "Failed to get status for pod" podUID="ccfa0c88-7f51-4d85-8a49-e05865c6a06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.753669 4751 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.756138 4751 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff" exitCode=0 Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.756270 4751 scope.go:117] "RemoveContainer" containerID="8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.756485 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.761178 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.762020 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"ccfa0c88-7f51-4d85-8a49-e05865c6a06e","Type":"ContainerDied","Data":"ca8ebba9df4a8c9712a669a8d97759aea5c95bd694f2cead6b4521af30eb8469"} Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.762140 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ca8ebba9df4a8c9712a669a8d97759aea5c95bd694f2cead6b4521af30eb8469" Jan 31 14:45:34 crc kubenswrapper[4751]: E0131 14:45:34.762305 4751 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.98:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.797912 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.798050 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.798042 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.798142 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.798227 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.798340 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). 
InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.798686 4751 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.798724 4751 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.798780 4751 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.798927 4751 status_manager.go:851] "Failed to get status for pod" podUID="ccfa0c88-7f51-4d85-8a49-e05865c6a06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.799415 4751 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.811343 4751 scope.go:117] "RemoveContainer" containerID="7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.830361 4751 scope.go:117] "RemoveContainer" containerID="08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea" Jan 31 14:45:34 
crc kubenswrapper[4751]: I0131 14:45:34.848744 4751 scope.go:117] "RemoveContainer" containerID="ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.869889 4751 scope.go:117] "RemoveContainer" containerID="39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.892220 4751 scope.go:117] "RemoveContainer" containerID="92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.930100 4751 scope.go:117] "RemoveContainer" containerID="8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19" Jan 31 14:45:34 crc kubenswrapper[4751]: E0131 14:45:34.930969 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\": container with ID starting with 8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19 not found: ID does not exist" containerID="8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.931063 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19"} err="failed to get container status \"8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\": rpc error: code = NotFound desc = could not find container \"8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19\": container with ID starting with 8df6c6abfaa0e16e8c6de87da9d3584b57a9e83e9dee03f98bb9de38cf71cc19 not found: ID does not exist" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.931143 4751 scope.go:117] "RemoveContainer" containerID="7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3" Jan 31 14:45:34 crc kubenswrapper[4751]: E0131 14:45:34.931726 
4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\": container with ID starting with 7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3 not found: ID does not exist" containerID="7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.931764 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3"} err="failed to get container status \"7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\": rpc error: code = NotFound desc = could not find container \"7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3\": container with ID starting with 7d34543b9a432a0dd029c18b90dd065749cf5a74b9221dcedf22a810afa749a3 not found: ID does not exist" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.931792 4751 scope.go:117] "RemoveContainer" containerID="08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea" Jan 31 14:45:34 crc kubenswrapper[4751]: E0131 14:45:34.932258 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\": container with ID starting with 08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea not found: ID does not exist" containerID="08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.932291 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea"} err="failed to get container status \"08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\": rpc error: code = 
NotFound desc = could not find container \"08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea\": container with ID starting with 08603a998264e94ef7871338a47019047495685fb867c5fa4f290f8a990dc6ea not found: ID does not exist" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.932312 4751 scope.go:117] "RemoveContainer" containerID="ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218" Jan 31 14:45:34 crc kubenswrapper[4751]: E0131 14:45:34.933054 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\": container with ID starting with ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218 not found: ID does not exist" containerID="ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.933130 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218"} err="failed to get container status \"ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\": rpc error: code = NotFound desc = could not find container \"ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218\": container with ID starting with ddd61fd469b2710e86edf5644cff4c4272ce73a59b6578c4cf941c5408e2d218 not found: ID does not exist" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.933161 4751 scope.go:117] "RemoveContainer" containerID="39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff" Jan 31 14:45:34 crc kubenswrapper[4751]: E0131 14:45:34.933883 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\": container with ID starting with 
39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff not found: ID does not exist" containerID="39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.933905 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff"} err="failed to get container status \"39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\": rpc error: code = NotFound desc = could not find container \"39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff\": container with ID starting with 39e4fe03f50f95359d284f7d5aebb0df697fba39743cf83857c7f17893cb2bff not found: ID does not exist" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.933922 4751 scope.go:117] "RemoveContainer" containerID="92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254" Jan 31 14:45:34 crc kubenswrapper[4751]: E0131 14:45:34.934617 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\": container with ID starting with 92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254 not found: ID does not exist" containerID="92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254" Jan 31 14:45:34 crc kubenswrapper[4751]: I0131 14:45:34.934701 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254"} err="failed to get container status \"92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\": rpc error: code = NotFound desc = could not find container \"92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254\": container with ID starting with 92ff81193764a63b8b46092711612e005ad922936fea86d92b4ef07a16f2a254 not found: ID does not 
exist" Jan 31 14:45:35 crc kubenswrapper[4751]: I0131 14:45:35.082846 4751 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 31 14:45:35 crc kubenswrapper[4751]: I0131 14:45:35.083411 4751 status_manager.go:851] "Failed to get status for pod" podUID="ccfa0c88-7f51-4d85-8a49-e05865c6a06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 31 14:45:35 crc kubenswrapper[4751]: I0131 14:45:35.486743 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" podUID="802d5225-ef3f-485c-bb85-3c0f18e42952" containerName="oauth-openshift" containerID="cri-o://01af9b04a121e47de6d720ef96908370b377b2bf6ed16ab772bd8cea30c24502" gracePeriod=15 Jan 31 14:45:35 crc kubenswrapper[4751]: I0131 14:45:35.772091 4751 generic.go:334] "Generic (PLEG): container finished" podID="802d5225-ef3f-485c-bb85-3c0f18e42952" containerID="01af9b04a121e47de6d720ef96908370b377b2bf6ed16ab772bd8cea30c24502" exitCode=0 Jan 31 14:45:35 crc kubenswrapper[4751]: I0131 14:45:35.772186 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" event={"ID":"802d5225-ef3f-485c-bb85-3c0f18e42952","Type":"ContainerDied","Data":"01af9b04a121e47de6d720ef96908370b377b2bf6ed16ab772bd8cea30c24502"} Jan 31 14:45:35 crc kubenswrapper[4751]: I0131 14:45:35.979016 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:45:35 crc kubenswrapper[4751]: I0131 14:45:35.979782 4751 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 31 14:45:35 crc kubenswrapper[4751]: I0131 14:45:35.980007 4751 status_manager.go:851] "Failed to get status for pod" podUID="802d5225-ef3f-485c-bb85-3c0f18e42952" pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-xr2gt\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 31 14:45:35 crc kubenswrapper[4751]: I0131 14:45:35.980232 4751 status_manager.go:851] "Failed to get status for pod" podUID="ccfa0c88-7f51-4d85-8a49-e05865c6a06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.121964 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/802d5225-ef3f-485c-bb85-3c0f18e42952-audit-policies\") pod \"802d5225-ef3f-485c-bb85-3c0f18e42952\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.122152 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-session\") pod \"802d5225-ef3f-485c-bb85-3c0f18e42952\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " 
Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.122253 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-trusted-ca-bundle\") pod \"802d5225-ef3f-485c-bb85-3c0f18e42952\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.122321 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-serving-cert\") pod \"802d5225-ef3f-485c-bb85-3c0f18e42952\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.122370 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnk4h\" (UniqueName: \"kubernetes.io/projected/802d5225-ef3f-485c-bb85-3c0f18e42952-kube-api-access-rnk4h\") pod \"802d5225-ef3f-485c-bb85-3c0f18e42952\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.122422 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-user-template-login\") pod \"802d5225-ef3f-485c-bb85-3c0f18e42952\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.122475 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-cliconfig\") pod \"802d5225-ef3f-485c-bb85-3c0f18e42952\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.122583 4751 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-ocp-branding-template\") pod \"802d5225-ef3f-485c-bb85-3c0f18e42952\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.122629 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-user-idp-0-file-data\") pod \"802d5225-ef3f-485c-bb85-3c0f18e42952\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.122677 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/802d5225-ef3f-485c-bb85-3c0f18e42952-audit-dir\") pod \"802d5225-ef3f-485c-bb85-3c0f18e42952\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.122772 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-user-template-error\") pod \"802d5225-ef3f-485c-bb85-3c0f18e42952\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.122821 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-user-template-provider-selection\") pod \"802d5225-ef3f-485c-bb85-3c0f18e42952\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.122891 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-service-ca\") pod \"802d5225-ef3f-485c-bb85-3c0f18e42952\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.122945 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-router-certs\") pod \"802d5225-ef3f-485c-bb85-3c0f18e42952\" (UID: \"802d5225-ef3f-485c-bb85-3c0f18e42952\") " Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.123636 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "802d5225-ef3f-485c-bb85-3c0f18e42952" (UID: "802d5225-ef3f-485c-bb85-3c0f18e42952"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.123900 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/802d5225-ef3f-485c-bb85-3c0f18e42952-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "802d5225-ef3f-485c-bb85-3c0f18e42952" (UID: "802d5225-ef3f-485c-bb85-3c0f18e42952"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.124140 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/802d5225-ef3f-485c-bb85-3c0f18e42952-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "802d5225-ef3f-485c-bb85-3c0f18e42952" (UID: "802d5225-ef3f-485c-bb85-3c0f18e42952"). InnerVolumeSpecName "audit-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.124832 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "802d5225-ef3f-485c-bb85-3c0f18e42952" (UID: "802d5225-ef3f-485c-bb85-3c0f18e42952"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.125528 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "802d5225-ef3f-485c-bb85-3c0f18e42952" (UID: "802d5225-ef3f-485c-bb85-3c0f18e42952"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.130032 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "802d5225-ef3f-485c-bb85-3c0f18e42952" (UID: "802d5225-ef3f-485c-bb85-3c0f18e42952"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.130243 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "802d5225-ef3f-485c-bb85-3c0f18e42952" (UID: "802d5225-ef3f-485c-bb85-3c0f18e42952"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.130502 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "802d5225-ef3f-485c-bb85-3c0f18e42952" (UID: "802d5225-ef3f-485c-bb85-3c0f18e42952"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.136937 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/802d5225-ef3f-485c-bb85-3c0f18e42952-kube-api-access-rnk4h" (OuterVolumeSpecName: "kube-api-access-rnk4h") pod "802d5225-ef3f-485c-bb85-3c0f18e42952" (UID: "802d5225-ef3f-485c-bb85-3c0f18e42952"). InnerVolumeSpecName "kube-api-access-rnk4h". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.137349 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "802d5225-ef3f-485c-bb85-3c0f18e42952" (UID: "802d5225-ef3f-485c-bb85-3c0f18e42952"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.137784 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "802d5225-ef3f-485c-bb85-3c0f18e42952" (UID: "802d5225-ef3f-485c-bb85-3c0f18e42952"). InnerVolumeSpecName "v4-0-config-user-template-error". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.138339 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "802d5225-ef3f-485c-bb85-3c0f18e42952" (UID: "802d5225-ef3f-485c-bb85-3c0f18e42952"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.139258 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "802d5225-ef3f-485c-bb85-3c0f18e42952" (UID: "802d5225-ef3f-485c-bb85-3c0f18e42952"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.139802 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "802d5225-ef3f-485c-bb85-3c0f18e42952" (UID: "802d5225-ef3f-485c-bb85-3c0f18e42952"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.224898 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.224964 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.224987 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.225008 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnk4h\" (UniqueName: \"kubernetes.io/projected/802d5225-ef3f-485c-bb85-3c0f18e42952-kube-api-access-rnk4h\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.225029 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.225049 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.225190 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" 
(UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.225212 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.225233 4751 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/802d5225-ef3f-485c-bb85-3c0f18e42952-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.225252 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.225274 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.225294 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.225313 4751 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/802d5225-ef3f-485c-bb85-3c0f18e42952-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 
14:45:36.225333 4751 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/802d5225-ef3f-485c-bb85-3c0f18e42952-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.410514 4751 status_manager.go:851] "Failed to get status for pod" podUID="ccfa0c88-7f51-4d85-8a49-e05865c6a06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.411282 4751 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.411966 4751 status_manager.go:851] "Failed to get status for pod" podUID="802d5225-ef3f-485c-bb85-3c0f18e42952" pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-xr2gt\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.418589 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 31 14:45:36 crc kubenswrapper[4751]: E0131 14:45:36.494521 4751 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.98:6443: connect: connection refused" 
event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188fd80de0800fbf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 14:45:32.692557759 +0000 UTC m=+237.067270684,LastTimestamp:2026-01-31 14:45:32.692557759 +0000 UTC m=+237.067270684,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.793363 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.793372 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" event={"ID":"802d5225-ef3f-485c-bb85-3c0f18e42952","Type":"ContainerDied","Data":"5084329a5b9f4efb799b2485cd137ef3c2a4c4cd5ed6e746dd1d5ef125ea23bd"} Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.793692 4751 scope.go:117] "RemoveContainer" containerID="01af9b04a121e47de6d720ef96908370b377b2bf6ed16ab772bd8cea30c24502" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.797108 4751 status_manager.go:851] "Failed to get status for pod" podUID="802d5225-ef3f-485c-bb85-3c0f18e42952" pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-xr2gt\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.800014 4751 status_manager.go:851] "Failed to get status for pod" podUID="ccfa0c88-7f51-4d85-8a49-e05865c6a06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.800752 4751 status_manager.go:851] "Failed to get status for pod" podUID="ccfa0c88-7f51-4d85-8a49-e05865c6a06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 31 14:45:36 crc kubenswrapper[4751]: I0131 14:45:36.801294 4751 status_manager.go:851] "Failed to get status for pod" podUID="802d5225-ef3f-485c-bb85-3c0f18e42952" 
pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-xr2gt\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 31 14:45:37 crc kubenswrapper[4751]: E0131 14:45:37.256353 4751 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 31 14:45:37 crc kubenswrapper[4751]: E0131 14:45:37.256954 4751 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 31 14:45:37 crc kubenswrapper[4751]: E0131 14:45:37.257334 4751 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 31 14:45:37 crc kubenswrapper[4751]: E0131 14:45:37.257728 4751 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 31 14:45:37 crc kubenswrapper[4751]: E0131 14:45:37.257988 4751 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 31 14:45:37 crc kubenswrapper[4751]: I0131 14:45:37.258015 4751 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 31 14:45:37 crc kubenswrapper[4751]: E0131 14:45:37.258255 4751 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="200ms" Jan 31 14:45:37 crc kubenswrapper[4751]: E0131 14:45:37.459226 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="400ms" Jan 31 14:45:37 crc kubenswrapper[4751]: E0131 14:45:37.860199 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="800ms" Jan 31 14:45:38 crc kubenswrapper[4751]: E0131 14:45:38.661798 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="1.6s" Jan 31 14:45:40 crc kubenswrapper[4751]: E0131 14:45:40.263374 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="3.2s" Jan 31 14:45:43 crc kubenswrapper[4751]: E0131 14:45:43.465147 4751 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.98:6443: connect: connection refused" interval="6.4s" Jan 31 14:45:44 crc kubenswrapper[4751]: E0131 14:45:44.484177 4751 
desired_state_of_world_populator.go:312] "Error processing volume" err="error processing PVC openshift-image-registry/crc-image-registry-storage: failed to fetch PVC from API server: Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-image-registry/persistentvolumeclaims/crc-image-registry-storage\": dial tcp 38.102.83.98:6443: connect: connection refused" pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" volumeName="registry-storage" Jan 31 14:45:46 crc kubenswrapper[4751]: I0131 14:45:46.405794 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:45:46 crc kubenswrapper[4751]: I0131 14:45:46.412428 4751 status_manager.go:851] "Failed to get status for pod" podUID="802d5225-ef3f-485c-bb85-3c0f18e42952" pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-xr2gt\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 31 14:45:46 crc kubenswrapper[4751]: I0131 14:45:46.413103 4751 status_manager.go:851] "Failed to get status for pod" podUID="ccfa0c88-7f51-4d85-8a49-e05865c6a06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 31 14:45:46 crc kubenswrapper[4751]: I0131 14:45:46.413940 4751 status_manager.go:851] "Failed to get status for pod" podUID="802d5225-ef3f-485c-bb85-3c0f18e42952" pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-xr2gt\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 31 14:45:46 crc kubenswrapper[4751]: I0131 14:45:46.414667 4751 status_manager.go:851] "Failed to get status for pod" 
podUID="ccfa0c88-7f51-4d85-8a49-e05865c6a06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 31 14:45:46 crc kubenswrapper[4751]: I0131 14:45:46.425615 4751 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6afb84a0-c564-45aa-b7a1-cd6f8273fe45" Jan 31 14:45:46 crc kubenswrapper[4751]: I0131 14:45:46.425649 4751 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6afb84a0-c564-45aa-b7a1-cd6f8273fe45" Jan 31 14:45:46 crc kubenswrapper[4751]: E0131 14:45:46.426123 4751 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:45:46 crc kubenswrapper[4751]: I0131 14:45:46.426754 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:45:46 crc kubenswrapper[4751]: W0131 14:45:46.451307 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-404b70ebaf5fed25a5454946db611d40733044239df31a2f8f6dda576f28c2c0 WatchSource:0}: Error finding container 404b70ebaf5fed25a5454946db611d40733044239df31a2f8f6dda576f28c2c0: Status 404 returned error can't find the container with id 404b70ebaf5fed25a5454946db611d40733044239df31a2f8f6dda576f28c2c0 Jan 31 14:45:46 crc kubenswrapper[4751]: E0131 14:45:46.495299 4751 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.98:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188fd80de0800fbf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-31 14:45:32.692557759 +0000 UTC m=+237.067270684,LastTimestamp:2026-01-31 14:45:32.692557759 +0000 UTC m=+237.067270684,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 31 14:45:46 crc kubenswrapper[4751]: I0131 14:45:46.886119 4751 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" 
containerID="742a7adfa45d74153bac627996d99b69392b41fb162b6188adf8e23c123aad69" exitCode=0 Jan 31 14:45:46 crc kubenswrapper[4751]: I0131 14:45:46.886226 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"742a7adfa45d74153bac627996d99b69392b41fb162b6188adf8e23c123aad69"} Jan 31 14:45:46 crc kubenswrapper[4751]: I0131 14:45:46.886624 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"404b70ebaf5fed25a5454946db611d40733044239df31a2f8f6dda576f28c2c0"} Jan 31 14:45:46 crc kubenswrapper[4751]: I0131 14:45:46.887126 4751 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6afb84a0-c564-45aa-b7a1-cd6f8273fe45" Jan 31 14:45:46 crc kubenswrapper[4751]: I0131 14:45:46.887165 4751 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6afb84a0-c564-45aa-b7a1-cd6f8273fe45" Jan 31 14:45:46 crc kubenswrapper[4751]: I0131 14:45:46.887604 4751 status_manager.go:851] "Failed to get status for pod" podUID="802d5225-ef3f-485c-bb85-3c0f18e42952" pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-xr2gt\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 31 14:45:46 crc kubenswrapper[4751]: E0131 14:45:46.887790 4751 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:45:46 crc kubenswrapper[4751]: I0131 14:45:46.888164 4751 status_manager.go:851] "Failed to get 
status for pod" podUID="ccfa0c88-7f51-4d85-8a49-e05865c6a06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 31 14:45:46 crc kubenswrapper[4751]: I0131 14:45:46.891690 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 31 14:45:46 crc kubenswrapper[4751]: I0131 14:45:46.891759 4751 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853" exitCode=1 Jan 31 14:45:46 crc kubenswrapper[4751]: I0131 14:45:46.891797 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853"} Jan 31 14:45:46 crc kubenswrapper[4751]: I0131 14:45:46.892486 4751 scope.go:117] "RemoveContainer" containerID="b37355e2b9d95a941be3552b713f7fe96973a49528f7a979524779b263412853" Jan 31 14:45:46 crc kubenswrapper[4751]: I0131 14:45:46.892774 4751 status_manager.go:851] "Failed to get status for pod" podUID="802d5225-ef3f-485c-bb85-3c0f18e42952" pod="openshift-authentication/oauth-openshift-558db77b4-xr2gt" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-xr2gt\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 31 14:45:46 crc kubenswrapper[4751]: I0131 14:45:46.893380 4751 status_manager.go:851] "Failed to get status for pod" podUID="ccfa0c88-7f51-4d85-8a49-e05865c6a06e" pod="openshift-kube-apiserver/installer-9-crc" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 31 14:45:46 crc kubenswrapper[4751]: I0131 14:45:46.893990 4751 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.98:6443: connect: connection refused" Jan 31 14:45:47 crc kubenswrapper[4751]: I0131 14:45:47.903620 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"64a8038adbf46c1a0accb259e789e83fc67321dda6a8a3aa486ff19d330d054d"} Jan 31 14:45:47 crc kubenswrapper[4751]: I0131 14:45:47.903669 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"4fc4b7e8589c2df8783c4da81ac9dbb4fdd25aa1d0b5de7ee5f479299d107d91"} Jan 31 14:45:47 crc kubenswrapper[4751]: I0131 14:45:47.903683 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"39599b3b855998562e1f7c861dd5691648eedf04ff2a6db2d75224a54c464df7"} Jan 31 14:45:47 crc kubenswrapper[4751]: I0131 14:45:47.907323 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 31 14:45:47 crc kubenswrapper[4751]: I0131 14:45:47.907372 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"f457cbdc5e9ea9752a74adf3088e3884f6d9789fd54e67d4c3ec0ff19f6d5401"} Jan 31 14:45:48 crc kubenswrapper[4751]: I0131 14:45:48.695603 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:45:48 crc kubenswrapper[4751]: I0131 14:45:48.926826 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"7e06d2b77d840d75967ee6907153f96eebedeff3900fb9de287bb4c4ff7b817b"} Jan 31 14:45:48 crc kubenswrapper[4751]: I0131 14:45:48.926870 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"307f74fe108cfba0f6cd3ab198e901fea1902ccc42924a9b7a66076b3b0e53a2"} Jan 31 14:45:48 crc kubenswrapper[4751]: I0131 14:45:48.927239 4751 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6afb84a0-c564-45aa-b7a1-cd6f8273fe45" Jan 31 14:45:48 crc kubenswrapper[4751]: I0131 14:45:48.927254 4751 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6afb84a0-c564-45aa-b7a1-cd6f8273fe45" Jan 31 14:45:49 crc kubenswrapper[4751]: I0131 14:45:49.777593 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:45:49 crc kubenswrapper[4751]: I0131 14:45:49.789670 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:45:51 crc kubenswrapper[4751]: I0131 14:45:51.427660 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 
14:45:51 crc kubenswrapper[4751]: I0131 14:45:51.428025 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:45:51 crc kubenswrapper[4751]: I0131 14:45:51.435720 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:45:54 crc kubenswrapper[4751]: I0131 14:45:54.049199 4751 kubelet.go:1914] "Deleted mirror pod because it is outdated" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:45:55 crc kubenswrapper[4751]: I0131 14:45:55.001799 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:45:55 crc kubenswrapper[4751]: I0131 14:45:55.001894 4751 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6afb84a0-c564-45aa-b7a1-cd6f8273fe45" Jan 31 14:45:55 crc kubenswrapper[4751]: I0131 14:45:55.002049 4751 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6afb84a0-c564-45aa-b7a1-cd6f8273fe45" Jan 31 14:45:55 crc kubenswrapper[4751]: I0131 14:45:55.007029 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:45:56 crc kubenswrapper[4751]: I0131 14:45:56.008691 4751 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6afb84a0-c564-45aa-b7a1-cd6f8273fe45" Jan 31 14:45:56 crc kubenswrapper[4751]: I0131 14:45:56.008819 4751 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6afb84a0-c564-45aa-b7a1-cd6f8273fe45" Jan 31 14:45:56 crc kubenswrapper[4751]: I0131 14:45:56.418374 4751 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" 
podUID="e25c6070-8daf-4743-8483-2439c48514be" Jan 31 14:45:57 crc kubenswrapper[4751]: I0131 14:45:57.016250 4751 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6afb84a0-c564-45aa-b7a1-cd6f8273fe45" Jan 31 14:45:57 crc kubenswrapper[4751]: I0131 14:45:57.016296 4751 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="6afb84a0-c564-45aa-b7a1-cd6f8273fe45" Jan 31 14:45:57 crc kubenswrapper[4751]: I0131 14:45:57.019746 4751 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="e25c6070-8daf-4743-8483-2439c48514be" Jan 31 14:45:58 crc kubenswrapper[4751]: I0131 14:45:58.700104 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 31 14:46:03 crc kubenswrapper[4751]: I0131 14:46:03.671939 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 31 14:46:03 crc kubenswrapper[4751]: I0131 14:46:03.796945 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 31 14:46:03 crc kubenswrapper[4751]: I0131 14:46:03.823745 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 31 14:46:03 crc kubenswrapper[4751]: I0131 14:46:03.881478 4751 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 31 14:46:03 crc kubenswrapper[4751]: I0131 14:46:03.889547 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-558db77b4-xr2gt"] Jan 31 14:46:03 crc kubenswrapper[4751]: I0131 14:46:03.889654 4751 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 31 14:46:03 crc kubenswrapper[4751]: I0131 14:46:03.896296 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 31 14:46:03 crc kubenswrapper[4751]: I0131 14:46:03.925566 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=9.925540265 podStartE2EDuration="9.925540265s" podCreationTimestamp="2026-01-31 14:45:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:46:03.916006294 +0000 UTC m=+268.290719219" watchObservedRunningTime="2026-01-31 14:46:03.925540265 +0000 UTC m=+268.300253190" Jan 31 14:46:04 crc kubenswrapper[4751]: I0131 14:46:04.165931 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 31 14:46:04 crc kubenswrapper[4751]: I0131 14:46:04.417231 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="802d5225-ef3f-485c-bb85-3c0f18e42952" path="/var/lib/kubelet/pods/802d5225-ef3f-485c-bb85-3c0f18e42952/volumes" Jan 31 14:46:04 crc kubenswrapper[4751]: I0131 14:46:04.630696 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 31 14:46:04 crc kubenswrapper[4751]: I0131 14:46:04.974119 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 31 14:46:05 crc kubenswrapper[4751]: I0131 14:46:05.192421 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 31 14:46:05 crc kubenswrapper[4751]: I0131 14:46:05.234227 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 31 14:46:05 crc 
kubenswrapper[4751]: I0131 14:46:05.482102 4751 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 31 14:46:05 crc kubenswrapper[4751]: I0131 14:46:05.482321 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://be1bca22b91e771b11166bb91585a254e34658c1ab13b852f1301a3b8029237f" gracePeriod=5 Jan 31 14:46:05 crc kubenswrapper[4751]: I0131 14:46:05.615248 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 31 14:46:05 crc kubenswrapper[4751]: I0131 14:46:05.735553 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 31 14:46:05 crc kubenswrapper[4751]: I0131 14:46:05.820268 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 31 14:46:05 crc kubenswrapper[4751]: I0131 14:46:05.860730 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 31 14:46:06 crc kubenswrapper[4751]: I0131 14:46:06.075008 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 31 14:46:06 crc kubenswrapper[4751]: I0131 14:46:06.088943 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 31 14:46:06 crc kubenswrapper[4751]: I0131 14:46:06.125298 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 31 14:46:06 crc kubenswrapper[4751]: I0131 14:46:06.205565 4751 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 31 14:46:06 crc kubenswrapper[4751]: I0131 14:46:06.245464 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 31 14:46:06 crc kubenswrapper[4751]: I0131 14:46:06.392702 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 31 14:46:06 crc kubenswrapper[4751]: I0131 14:46:06.675219 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 31 14:46:06 crc kubenswrapper[4751]: I0131 14:46:06.946628 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 31 14:46:06 crc kubenswrapper[4751]: I0131 14:46:06.968627 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 31 14:46:07 crc kubenswrapper[4751]: I0131 14:46:07.050695 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 31 14:46:07 crc kubenswrapper[4751]: I0131 14:46:07.065173 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 31 14:46:07 crc kubenswrapper[4751]: I0131 14:46:07.091271 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 31 14:46:07 crc kubenswrapper[4751]: I0131 14:46:07.196130 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 31 14:46:07 crc kubenswrapper[4751]: I0131 14:46:07.208262 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 31 14:46:07 
crc kubenswrapper[4751]: I0131 14:46:07.399706 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 31 14:46:07 crc kubenswrapper[4751]: I0131 14:46:07.414400 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 31 14:46:07 crc kubenswrapper[4751]: I0131 14:46:07.430305 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 31 14:46:07 crc kubenswrapper[4751]: I0131 14:46:07.549561 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 31 14:46:07 crc kubenswrapper[4751]: I0131 14:46:07.557843 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 31 14:46:07 crc kubenswrapper[4751]: I0131 14:46:07.716993 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 31 14:46:07 crc kubenswrapper[4751]: I0131 14:46:07.805203 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 31 14:46:07 crc kubenswrapper[4751]: I0131 14:46:07.929753 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 31 14:46:07 crc kubenswrapper[4751]: I0131 14:46:07.975783 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 31 14:46:07 crc kubenswrapper[4751]: I0131 14:46:07.995480 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 31 14:46:08 crc kubenswrapper[4751]: I0131 14:46:08.202158 4751 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 31 14:46:08 crc kubenswrapper[4751]: I0131 14:46:08.216863 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 31 14:46:08 crc kubenswrapper[4751]: I0131 14:46:08.270764 4751 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 31 14:46:08 crc kubenswrapper[4751]: I0131 14:46:08.389193 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 31 14:46:08 crc kubenswrapper[4751]: I0131 14:46:08.409634 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 31 14:46:08 crc kubenswrapper[4751]: I0131 14:46:08.530630 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 31 14:46:08 crc kubenswrapper[4751]: I0131 14:46:08.533322 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 31 14:46:08 crc kubenswrapper[4751]: I0131 14:46:08.616786 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 31 14:46:08 crc kubenswrapper[4751]: I0131 14:46:08.710427 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 31 14:46:08 crc kubenswrapper[4751]: I0131 14:46:08.813652 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 31 14:46:08 crc kubenswrapper[4751]: I0131 14:46:08.842061 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 31 14:46:08 crc 
kubenswrapper[4751]: I0131 14:46:08.914880 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 31 14:46:08 crc kubenswrapper[4751]: I0131 14:46:08.965130 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 31 14:46:08 crc kubenswrapper[4751]: I0131 14:46:08.969311 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 31 14:46:09 crc kubenswrapper[4751]: I0131 14:46:09.039185 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 31 14:46:09 crc kubenswrapper[4751]: I0131 14:46:09.183104 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 31 14:46:09 crc kubenswrapper[4751]: I0131 14:46:09.374183 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 31 14:46:09 crc kubenswrapper[4751]: I0131 14:46:09.426933 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 31 14:46:09 crc kubenswrapper[4751]: I0131 14:46:09.429735 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 31 14:46:09 crc kubenswrapper[4751]: I0131 14:46:09.461810 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 31 14:46:09 crc kubenswrapper[4751]: I0131 14:46:09.516294 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 31 14:46:09 crc kubenswrapper[4751]: I0131 14:46:09.524012 4751 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 31 14:46:09 crc kubenswrapper[4751]: I0131 14:46:09.630578 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 31 14:46:09 crc kubenswrapper[4751]: I0131 14:46:09.859163 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 31 14:46:09 crc kubenswrapper[4751]: I0131 14:46:09.877060 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 31 14:46:09 crc kubenswrapper[4751]: I0131 14:46:09.883717 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 14:46:10 crc kubenswrapper[4751]: I0131 14:46:10.016519 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 31 14:46:10 crc kubenswrapper[4751]: I0131 14:46:10.176313 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 31 14:46:10 crc kubenswrapper[4751]: I0131 14:46:10.201590 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 31 14:46:10 crc kubenswrapper[4751]: I0131 14:46:10.259745 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 31 14:46:10 crc kubenswrapper[4751]: I0131 14:46:10.321139 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 31 14:46:10 crc kubenswrapper[4751]: I0131 14:46:10.326514 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 31 14:46:10 crc 
kubenswrapper[4751]: I0131 14:46:10.343823 4751 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 31 14:46:10 crc kubenswrapper[4751]: I0131 14:46:10.346564 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 31 14:46:10 crc kubenswrapper[4751]: I0131 14:46:10.380713 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 14:46:10 crc kubenswrapper[4751]: I0131 14:46:10.482389 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 31 14:46:10 crc kubenswrapper[4751]: I0131 14:46:10.534519 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 31 14:46:10 crc kubenswrapper[4751]: I0131 14:46:10.546267 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 31 14:46:10 crc kubenswrapper[4751]: I0131 14:46:10.622818 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 31 14:46:10 crc kubenswrapper[4751]: I0131 14:46:10.624824 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 31 14:46:10 crc kubenswrapper[4751]: I0131 14:46:10.640675 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 31 14:46:10 crc kubenswrapper[4751]: I0131 14:46:10.679542 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 31 14:46:10 crc kubenswrapper[4751]: I0131 14:46:10.847560 4751 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 31 14:46:10 crc kubenswrapper[4751]: I0131 14:46:10.935953 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 31 14:46:10 crc kubenswrapper[4751]: I0131 14:46:10.955921 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 31 14:46:10 crc kubenswrapper[4751]: I0131 14:46:10.961317 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 31 14:46:10 crc kubenswrapper[4751]: I0131 14:46:10.996371 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.032755 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.106989 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.107032 4751 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="be1bca22b91e771b11166bb91585a254e34658c1ab13b852f1301a3b8029237f" exitCode=137 Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.132229 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.179638 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.189955 4751 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-route-controller-manager"/"client-ca" Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.213366 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.213438 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.219569 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.225847 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.235567 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.338728 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.338825 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.338893 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod 
\"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.338919 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.338973 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.339030 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.339121 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.339171 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.339253 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.339410 4751 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.339442 4751 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.339467 4751 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.339490 4751 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.342506 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.355909 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: 
"f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.441113 4751 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.467582 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.502664 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client" Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.614862 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.623848 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.725248 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.847175 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.878233 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.923600 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.953527 4751 reflector.go:368] 
Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 31 14:46:11 crc kubenswrapper[4751]: I0131 14:46:11.970969 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 31 14:46:12 crc kubenswrapper[4751]: I0131 14:46:12.047255 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 31 14:46:12 crc kubenswrapper[4751]: I0131 14:46:12.077238 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 31 14:46:12 crc kubenswrapper[4751]: I0131 14:46:12.115554 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 31 14:46:12 crc kubenswrapper[4751]: I0131 14:46:12.115626 4751 scope.go:117] "RemoveContainer" containerID="be1bca22b91e771b11166bb91585a254e34658c1ab13b852f1301a3b8029237f" Jan 31 14:46:12 crc kubenswrapper[4751]: I0131 14:46:12.115710 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 31 14:46:12 crc kubenswrapper[4751]: I0131 14:46:12.120175 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 31 14:46:12 crc kubenswrapper[4751]: I0131 14:46:12.253918 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 31 14:46:12 crc kubenswrapper[4751]: I0131 14:46:12.276994 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 31 14:46:12 crc kubenswrapper[4751]: I0131 14:46:12.282145 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 31 14:46:12 crc kubenswrapper[4751]: I0131 14:46:12.374381 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 31 14:46:12 crc kubenswrapper[4751]: I0131 14:46:12.399879 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 31 14:46:12 crc kubenswrapper[4751]: I0131 14:46:12.415332 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 31 14:46:12 crc kubenswrapper[4751]: I0131 14:46:12.416396 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 31 14:46:12 crc kubenswrapper[4751]: I0131 14:46:12.416933 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 31 14:46:12 crc kubenswrapper[4751]: I0131 14:46:12.528876 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 31 
14:46:12 crc kubenswrapper[4751]: I0131 14:46:12.589030 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 31 14:46:12 crc kubenswrapper[4751]: I0131 14:46:12.629869 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 31 14:46:12 crc kubenswrapper[4751]: I0131 14:46:12.753793 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 31 14:46:12 crc kubenswrapper[4751]: I0131 14:46:12.802340 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 31 14:46:12 crc kubenswrapper[4751]: I0131 14:46:12.832026 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 31 14:46:12 crc kubenswrapper[4751]: I0131 14:46:12.838234 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 31 14:46:12 crc kubenswrapper[4751]: I0131 14:46:12.847639 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 31 14:46:13 crc kubenswrapper[4751]: I0131 14:46:13.121416 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 31 14:46:13 crc kubenswrapper[4751]: I0131 14:46:13.171595 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 31 14:46:13 crc kubenswrapper[4751]: I0131 14:46:13.332036 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 31 14:46:13 crc kubenswrapper[4751]: I0131 14:46:13.456275 4751 reflector.go:368] Caches populated 
for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 31 14:46:13 crc kubenswrapper[4751]: I0131 14:46:13.470198 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 31 14:46:13 crc kubenswrapper[4751]: I0131 14:46:13.501428 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 31 14:46:13 crc kubenswrapper[4751]: I0131 14:46:13.503144 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 31 14:46:13 crc kubenswrapper[4751]: I0131 14:46:13.570644 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 31 14:46:13 crc kubenswrapper[4751]: I0131 14:46:13.678756 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 31 14:46:13 crc kubenswrapper[4751]: I0131 14:46:13.787257 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 31 14:46:13 crc kubenswrapper[4751]: I0131 14:46:13.877211 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 31 14:46:13 crc kubenswrapper[4751]: I0131 14:46:13.947593 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 31 14:46:13 crc kubenswrapper[4751]: I0131 14:46:13.992372 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 31 14:46:14 crc kubenswrapper[4751]: I0131 14:46:14.033894 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 31 14:46:14 crc kubenswrapper[4751]: I0131 
14:46:14.116229 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 31 14:46:14 crc kubenswrapper[4751]: I0131 14:46:14.122893 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 31 14:46:14 crc kubenswrapper[4751]: I0131 14:46:14.127814 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 31 14:46:14 crc kubenswrapper[4751]: I0131 14:46:14.182528 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 31 14:46:14 crc kubenswrapper[4751]: I0131 14:46:14.217206 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 31 14:46:14 crc kubenswrapper[4751]: I0131 14:46:14.563325 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 31 14:46:14 crc kubenswrapper[4751]: I0131 14:46:14.615013 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 31 14:46:14 crc kubenswrapper[4751]: I0131 14:46:14.730901 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 31 14:46:14 crc kubenswrapper[4751]: I0131 14:46:14.766194 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 31 14:46:14 crc kubenswrapper[4751]: I0131 14:46:14.797606 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 31 14:46:14 crc kubenswrapper[4751]: I0131 14:46:14.930251 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 31 14:46:14 crc kubenswrapper[4751]: I0131 14:46:14.970674 
4751 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 31 14:46:14 crc kubenswrapper[4751]: I0131 14:46:14.995244 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 31 14:46:15 crc kubenswrapper[4751]: I0131 14:46:15.096943 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 31 14:46:15 crc kubenswrapper[4751]: I0131 14:46:15.097016 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 31 14:46:15 crc kubenswrapper[4751]: I0131 14:46:15.141147 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 31 14:46:15 crc kubenswrapper[4751]: I0131 14:46:15.169226 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 31 14:46:15 crc kubenswrapper[4751]: I0131 14:46:15.225130 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 31 14:46:15 crc kubenswrapper[4751]: I0131 14:46:15.320115 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 31 14:46:15 crc kubenswrapper[4751]: I0131 14:46:15.601766 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 31 14:46:15 crc kubenswrapper[4751]: I0131 14:46:15.611456 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 14:46:15 crc kubenswrapper[4751]: I0131 14:46:15.614039 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 31 14:46:15 crc kubenswrapper[4751]: I0131 14:46:15.643539 4751 reflector.go:368] Caches populated for 
*v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 31 14:46:15 crc kubenswrapper[4751]: I0131 14:46:15.646628 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 31 14:46:15 crc kubenswrapper[4751]: I0131 14:46:15.719006 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 31 14:46:15 crc kubenswrapper[4751]: I0131 14:46:15.720531 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 31 14:46:15 crc kubenswrapper[4751]: I0131 14:46:15.751696 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 31 14:46:15 crc kubenswrapper[4751]: I0131 14:46:15.754620 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 14:46:15 crc kubenswrapper[4751]: I0131 14:46:15.756391 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 31 14:46:15 crc kubenswrapper[4751]: I0131 14:46:15.770274 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 31 14:46:15 crc kubenswrapper[4751]: I0131 14:46:15.851846 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 31 14:46:15 crc kubenswrapper[4751]: I0131 14:46:15.997635 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 31 14:46:16 crc kubenswrapper[4751]: I0131 14:46:16.007472 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 31 14:46:16 crc kubenswrapper[4751]: I0131 14:46:16.038332 4751 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 31 14:46:16 crc kubenswrapper[4751]: I0131 14:46:16.090654 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 31 14:46:16 crc kubenswrapper[4751]: I0131 14:46:16.095676 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 31 14:46:16 crc kubenswrapper[4751]: I0131 14:46:16.146749 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 31 14:46:16 crc kubenswrapper[4751]: I0131 14:46:16.168385 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 31 14:46:16 crc kubenswrapper[4751]: I0131 14:46:16.202218 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 31 14:46:16 crc kubenswrapper[4751]: I0131 14:46:16.206995 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 31 14:46:16 crc kubenswrapper[4751]: I0131 14:46:16.217670 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 14:46:16 crc kubenswrapper[4751]: I0131 14:46:16.220981 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 31 14:46:16 crc kubenswrapper[4751]: I0131 14:46:16.231880 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 31 14:46:16 crc kubenswrapper[4751]: I0131 14:46:16.249741 4751 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 31 14:46:16 crc kubenswrapper[4751]: I0131 14:46:16.309175 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 31 14:46:16 crc kubenswrapper[4751]: I0131 14:46:16.436670 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 31 14:46:16 crc kubenswrapper[4751]: I0131 14:46:16.521712 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 31 14:46:16 crc kubenswrapper[4751]: I0131 14:46:16.581209 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 31 14:46:16 crc kubenswrapper[4751]: I0131 14:46:16.584997 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 31 14:46:16 crc kubenswrapper[4751]: I0131 14:46:16.615561 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 31 14:46:16 crc kubenswrapper[4751]: I0131 14:46:16.674716 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj" Jan 31 14:46:16 crc kubenswrapper[4751]: I0131 14:46:16.696591 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 31 14:46:16 crc kubenswrapper[4751]: I0131 14:46:16.900521 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 31 14:46:16 crc kubenswrapper[4751]: I0131 14:46:16.916472 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 31 14:46:16 crc kubenswrapper[4751]: 
I0131 14:46:16.984663 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 31 14:46:17 crc kubenswrapper[4751]: I0131 14:46:17.042179 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 31 14:46:17 crc kubenswrapper[4751]: I0131 14:46:17.190221 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 31 14:46:17 crc kubenswrapper[4751]: I0131 14:46:17.199141 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 31 14:46:17 crc kubenswrapper[4751]: I0131 14:46:17.212455 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 31 14:46:17 crc kubenswrapper[4751]: I0131 14:46:17.241880 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 31 14:46:17 crc kubenswrapper[4751]: I0131 14:46:17.509874 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 14:46:17 crc kubenswrapper[4751]: I0131 14:46:17.545098 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 31 14:46:17 crc kubenswrapper[4751]: I0131 14:46:17.603571 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 31 14:46:17 crc kubenswrapper[4751]: I0131 14:46:17.654244 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 31 14:46:17 crc kubenswrapper[4751]: I0131 14:46:17.664647 4751 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 31 14:46:17 crc kubenswrapper[4751]: I0131 14:46:17.677745 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 31 14:46:17 crc kubenswrapper[4751]: I0131 14:46:17.685361 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 31 14:46:17 crc kubenswrapper[4751]: I0131 14:46:17.689215 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 31 14:46:17 crc kubenswrapper[4751]: I0131 14:46:17.790867 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 31 14:46:17 crc kubenswrapper[4751]: I0131 14:46:17.796491 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 31 14:46:17 crc kubenswrapper[4751]: I0131 14:46:17.797411 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 31 14:46:17 crc kubenswrapper[4751]: I0131 14:46:17.804622 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 31 14:46:17 crc kubenswrapper[4751]: I0131 14:46:17.884734 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 31 14:46:18 crc kubenswrapper[4751]: I0131 14:46:18.000363 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 31 14:46:18 crc kubenswrapper[4751]: I0131 14:46:18.053607 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 31 14:46:18 crc kubenswrapper[4751]: I0131 14:46:18.070426 4751 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 31 14:46:18 crc kubenswrapper[4751]: I0131 14:46:18.121469 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 31 14:46:18 crc kubenswrapper[4751]: I0131 14:46:18.257112 4751 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 31 14:46:18 crc kubenswrapper[4751]: I0131 14:46:18.260683 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 31 14:46:18 crc kubenswrapper[4751]: I0131 14:46:18.396027 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 31 14:46:18 crc kubenswrapper[4751]: I0131 14:46:18.420925 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 31 14:46:18 crc kubenswrapper[4751]: I0131 14:46:18.465150 4751 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 31 14:46:18 crc kubenswrapper[4751]: I0131 14:46:18.470410 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 31 14:46:18 crc kubenswrapper[4751]: I0131 14:46:18.549187 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 31 14:46:18 crc kubenswrapper[4751]: I0131 14:46:18.567342 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 31 14:46:18 crc kubenswrapper[4751]: I0131 14:46:18.636813 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" 
Jan 31 14:46:18 crc kubenswrapper[4751]: I0131 14:46:18.646900 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 31 14:46:18 crc kubenswrapper[4751]: I0131 14:46:18.691899 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 31 14:46:18 crc kubenswrapper[4751]: I0131 14:46:18.773216 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 31 14:46:18 crc kubenswrapper[4751]: I0131 14:46:18.813050 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 31 14:46:18 crc kubenswrapper[4751]: I0131 14:46:18.837932 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 31 14:46:18 crc kubenswrapper[4751]: I0131 14:46:18.849687 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 31 14:46:18 crc kubenswrapper[4751]: I0131 14:46:18.852000 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 31 14:46:19 crc kubenswrapper[4751]: I0131 14:46:19.085698 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 14:46:19 crc kubenswrapper[4751]: I0131 14:46:19.132946 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 31 14:46:19 crc kubenswrapper[4751]: I0131 14:46:19.333936 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 14:46:19 crc kubenswrapper[4751]: I0131 14:46:19.724171 4751 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 31 14:46:19 crc kubenswrapper[4751]: I0131 14:46:19.772811 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 31 14:46:19 crc kubenswrapper[4751]: I0131 14:46:19.933249 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.041970 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-f699678c6-qbqdh"] Jan 31 14:46:20 crc kubenswrapper[4751]: E0131 14:46:20.042385 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.042397 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 31 14:46:20 crc kubenswrapper[4751]: E0131 14:46:20.042413 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ccfa0c88-7f51-4d85-8a49-e05865c6a06e" containerName="installer" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.042419 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ccfa0c88-7f51-4d85-8a49-e05865c6a06e" containerName="installer" Jan 31 14:46:20 crc kubenswrapper[4751]: E0131 14:46:20.042428 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="802d5225-ef3f-485c-bb85-3c0f18e42952" containerName="oauth-openshift" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.042434 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="802d5225-ef3f-485c-bb85-3c0f18e42952" containerName="oauth-openshift" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.042512 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ccfa0c88-7f51-4d85-8a49-e05865c6a06e" containerName="installer" 
Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.042527 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="802d5225-ef3f-485c-bb85-3c0f18e42952" containerName="oauth-openshift" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.042536 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.042876 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.049234 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.049507 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.049721 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.050024 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.050115 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.050260 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.050511 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.050688 4751 
reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.051027 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.051142 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.051244 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.054345 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.060703 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-f699678c6-qbqdh"] Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.063581 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c36bd7cf-5b67-414c-87f5-96de17336696-audit-policies\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.063647 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqpbg\" (UniqueName: \"kubernetes.io/projected/c36bd7cf-5b67-414c-87f5-96de17336696-kube-api-access-cqpbg\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.063694 
4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c36bd7cf-5b67-414c-87f5-96de17336696-v4-0-config-system-router-certs\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.063751 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c36bd7cf-5b67-414c-87f5-96de17336696-v4-0-config-user-template-login\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.063798 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c36bd7cf-5b67-414c-87f5-96de17336696-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.063831 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c36bd7cf-5b67-414c-87f5-96de17336696-v4-0-config-system-session\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.063872 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/c36bd7cf-5b67-414c-87f5-96de17336696-audit-dir\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.063909 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c36bd7cf-5b67-414c-87f5-96de17336696-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.063950 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c36bd7cf-5b67-414c-87f5-96de17336696-v4-0-config-system-service-ca\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.063985 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c36bd7cf-5b67-414c-87f5-96de17336696-v4-0-config-user-template-error\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.064017 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c36bd7cf-5b67-414c-87f5-96de17336696-v4-0-config-system-serving-cert\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: 
\"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.064122 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c36bd7cf-5b67-414c-87f5-96de17336696-v4-0-config-system-cliconfig\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.064165 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c36bd7cf-5b67-414c-87f5-96de17336696-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.064198 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c36bd7cf-5b67-414c-87f5-96de17336696-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.064763 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.070249 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.072566 4751 reflector.go:368] Caches 
populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.076932 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.165903 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/c36bd7cf-5b67-414c-87f5-96de17336696-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.165973 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c36bd7cf-5b67-414c-87f5-96de17336696-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.166036 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c36bd7cf-5b67-414c-87f5-96de17336696-audit-policies\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.166099 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqpbg\" (UniqueName: \"kubernetes.io/projected/c36bd7cf-5b67-414c-87f5-96de17336696-kube-api-access-cqpbg\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " 
pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.166140 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c36bd7cf-5b67-414c-87f5-96de17336696-v4-0-config-system-router-certs\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.166198 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c36bd7cf-5b67-414c-87f5-96de17336696-v4-0-config-user-template-login\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.166248 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c36bd7cf-5b67-414c-87f5-96de17336696-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.166282 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c36bd7cf-5b67-414c-87f5-96de17336696-v4-0-config-system-session\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.166322 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c36bd7cf-5b67-414c-87f5-96de17336696-audit-dir\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.166359 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c36bd7cf-5b67-414c-87f5-96de17336696-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.166399 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/c36bd7cf-5b67-414c-87f5-96de17336696-v4-0-config-system-service-ca\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.166437 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c36bd7cf-5b67-414c-87f5-96de17336696-v4-0-config-user-template-error\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.166476 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c36bd7cf-5b67-414c-87f5-96de17336696-v4-0-config-system-serving-cert\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " 
pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.166524 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c36bd7cf-5b67-414c-87f5-96de17336696-v4-0-config-system-cliconfig\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.166563 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/c36bd7cf-5b67-414c-87f5-96de17336696-audit-dir\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.167454 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c36bd7cf-5b67-414c-87f5-96de17336696-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.167754 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/c36bd7cf-5b67-414c-87f5-96de17336696-v4-0-config-system-cliconfig\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.168300 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: 
\"kubernetes.io/configmap/c36bd7cf-5b67-414c-87f5-96de17336696-v4-0-config-system-service-ca\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.168676 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/c36bd7cf-5b67-414c-87f5-96de17336696-audit-policies\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.172511 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/c36bd7cf-5b67-414c-87f5-96de17336696-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.172680 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/c36bd7cf-5b67-414c-87f5-96de17336696-v4-0-config-system-router-certs\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.173499 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/c36bd7cf-5b67-414c-87f5-96de17336696-v4-0-config-system-session\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc 
kubenswrapper[4751]: I0131 14:46:20.175338 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/c36bd7cf-5b67-414c-87f5-96de17336696-v4-0-config-system-serving-cert\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.176136 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/c36bd7cf-5b67-414c-87f5-96de17336696-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.176930 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/c36bd7cf-5b67-414c-87f5-96de17336696-v4-0-config-user-template-error\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.179875 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/c36bd7cf-5b67-414c-87f5-96de17336696-v4-0-config-user-template-login\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.182922 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: 
\"kubernetes.io/secret/c36bd7cf-5b67-414c-87f5-96de17336696-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.186872 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqpbg\" (UniqueName: \"kubernetes.io/projected/c36bd7cf-5b67-414c-87f5-96de17336696-kube-api-access-cqpbg\") pod \"oauth-openshift-f699678c6-qbqdh\" (UID: \"c36bd7cf-5b67-414c-87f5-96de17336696\") " pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.265426 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.363094 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.370442 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:20 crc kubenswrapper[4751]: I0131 14:46:20.806045 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-f699678c6-qbqdh"] Jan 31 14:46:21 crc kubenswrapper[4751]: I0131 14:46:21.169866 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" event={"ID":"c36bd7cf-5b67-414c-87f5-96de17336696","Type":"ContainerStarted","Data":"2b5cff92e9c639be33386db99fdb0476e4a0f37da7395b60fcf06f1a4046a4e5"} Jan 31 14:46:21 crc kubenswrapper[4751]: I0131 14:46:21.169904 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" event={"ID":"c36bd7cf-5b67-414c-87f5-96de17336696","Type":"ContainerStarted","Data":"b5ca8568e3e0799b2108dad67eb1e21f53d70396acd240a9bf5acb4e22d83be5"} Jan 31 14:46:21 crc kubenswrapper[4751]: I0131 14:46:21.170132 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:21 crc kubenswrapper[4751]: I0131 14:46:21.196978 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" podStartSLOduration=71.196959662 podStartE2EDuration="1m11.196959662s" podCreationTimestamp="2026-01-31 14:45:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:46:21.19650433 +0000 UTC m=+285.571217215" watchObservedRunningTime="2026-01-31 14:46:21.196959662 +0000 UTC m=+285.571672557" Jan 31 14:46:21 crc kubenswrapper[4751]: I0131 14:46:21.410497 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-f699678c6-qbqdh" Jan 31 14:46:23 crc kubenswrapper[4751]: I0131 14:46:23.912897 4751 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7df5c4f8d-6z7qk"] Jan 31 14:46:23 crc kubenswrapper[4751]: I0131 14:46:23.913409 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7df5c4f8d-6z7qk" podUID="10e50c97-9956-48fc-a759-6d6a2e2d8ca5" containerName="controller-manager" containerID="cri-o://ddc997e5bd0f4e42afe9a829495321c7b00002150e75b55e2c4d433cd4092402" gracePeriod=30 Jan 31 14:46:24 crc kubenswrapper[4751]: I0131 14:46:24.006736 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77b9dd5444-kpj8z"] Jan 31 14:46:24 crc kubenswrapper[4751]: I0131 14:46:24.006971 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-77b9dd5444-kpj8z" podUID="e51033f6-0061-4b08-9d82-11c610c7d396" containerName="route-controller-manager" containerID="cri-o://d3805c33079a37613edf0ea51929b4cc19479078ccdac739485e2af4ae10c78a" gracePeriod=30 Jan 31 14:46:24 crc kubenswrapper[4751]: I0131 14:46:24.193296 4751 generic.go:334] "Generic (PLEG): container finished" podID="10e50c97-9956-48fc-a759-6d6a2e2d8ca5" containerID="ddc997e5bd0f4e42afe9a829495321c7b00002150e75b55e2c4d433cd4092402" exitCode=0 Jan 31 14:46:24 crc kubenswrapper[4751]: I0131 14:46:24.193401 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7df5c4f8d-6z7qk" event={"ID":"10e50c97-9956-48fc-a759-6d6a2e2d8ca5","Type":"ContainerDied","Data":"ddc997e5bd0f4e42afe9a829495321c7b00002150e75b55e2c4d433cd4092402"} Jan 31 14:46:24 crc kubenswrapper[4751]: I0131 14:46:24.195802 4751 generic.go:334] "Generic (PLEG): container finished" podID="e51033f6-0061-4b08-9d82-11c610c7d396" containerID="d3805c33079a37613edf0ea51929b4cc19479078ccdac739485e2af4ae10c78a" exitCode=0 Jan 31 14:46:24 crc 
kubenswrapper[4751]: I0131 14:46:24.195856 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77b9dd5444-kpj8z" event={"ID":"e51033f6-0061-4b08-9d82-11c610c7d396","Type":"ContainerDied","Data":"d3805c33079a37613edf0ea51929b4cc19479078ccdac739485e2af4ae10c78a"} Jan 31 14:46:24 crc kubenswrapper[4751]: I0131 14:46:24.461163 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77b9dd5444-kpj8z" Jan 31 14:46:24 crc kubenswrapper[4751]: I0131 14:46:24.521344 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e51033f6-0061-4b08-9d82-11c610c7d396-serving-cert\") pod \"e51033f6-0061-4b08-9d82-11c610c7d396\" (UID: \"e51033f6-0061-4b08-9d82-11c610c7d396\") " Jan 31 14:46:24 crc kubenswrapper[4751]: I0131 14:46:24.521463 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e51033f6-0061-4b08-9d82-11c610c7d396-config\") pod \"e51033f6-0061-4b08-9d82-11c610c7d396\" (UID: \"e51033f6-0061-4b08-9d82-11c610c7d396\") " Jan 31 14:46:24 crc kubenswrapper[4751]: I0131 14:46:24.522204 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mqnt2\" (UniqueName: \"kubernetes.io/projected/e51033f6-0061-4b08-9d82-11c610c7d396-kube-api-access-mqnt2\") pod \"e51033f6-0061-4b08-9d82-11c610c7d396\" (UID: \"e51033f6-0061-4b08-9d82-11c610c7d396\") " Jan 31 14:46:24 crc kubenswrapper[4751]: I0131 14:46:24.522250 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e51033f6-0061-4b08-9d82-11c610c7d396-client-ca\") pod \"e51033f6-0061-4b08-9d82-11c610c7d396\" (UID: \"e51033f6-0061-4b08-9d82-11c610c7d396\") " Jan 31 14:46:24 crc kubenswrapper[4751]: 
I0131 14:46:24.522336 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e51033f6-0061-4b08-9d82-11c610c7d396-config" (OuterVolumeSpecName: "config") pod "e51033f6-0061-4b08-9d82-11c610c7d396" (UID: "e51033f6-0061-4b08-9d82-11c610c7d396"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:46:24 crc kubenswrapper[4751]: I0131 14:46:24.522746 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e51033f6-0061-4b08-9d82-11c610c7d396-client-ca" (OuterVolumeSpecName: "client-ca") pod "e51033f6-0061-4b08-9d82-11c610c7d396" (UID: "e51033f6-0061-4b08-9d82-11c610c7d396"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:46:24 crc kubenswrapper[4751]: I0131 14:46:24.528269 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e51033f6-0061-4b08-9d82-11c610c7d396-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e51033f6-0061-4b08-9d82-11c610c7d396" (UID: "e51033f6-0061-4b08-9d82-11c610c7d396"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:46:24 crc kubenswrapper[4751]: I0131 14:46:24.530909 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e51033f6-0061-4b08-9d82-11c610c7d396-kube-api-access-mqnt2" (OuterVolumeSpecName: "kube-api-access-mqnt2") pod "e51033f6-0061-4b08-9d82-11c610c7d396" (UID: "e51033f6-0061-4b08-9d82-11c610c7d396"). InnerVolumeSpecName "kube-api-access-mqnt2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:46:24 crc kubenswrapper[4751]: I0131 14:46:24.624031 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e51033f6-0061-4b08-9d82-11c610c7d396-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:46:24 crc kubenswrapper[4751]: I0131 14:46:24.624082 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e51033f6-0061-4b08-9d82-11c610c7d396-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:46:24 crc kubenswrapper[4751]: I0131 14:46:24.624094 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mqnt2\" (UniqueName: \"kubernetes.io/projected/e51033f6-0061-4b08-9d82-11c610c7d396-kube-api-access-mqnt2\") on node \"crc\" DevicePath \"\"" Jan 31 14:46:24 crc kubenswrapper[4751]: I0131 14:46:24.624107 4751 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e51033f6-0061-4b08-9d82-11c610c7d396-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:46:24 crc kubenswrapper[4751]: I0131 14:46:24.928010 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7df5c4f8d-6z7qk" Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.028597 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/10e50c97-9956-48fc-a759-6d6a2e2d8ca5-proxy-ca-bundles\") pod \"10e50c97-9956-48fc-a759-6d6a2e2d8ca5\" (UID: \"10e50c97-9956-48fc-a759-6d6a2e2d8ca5\") " Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.030027 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10e50c97-9956-48fc-a759-6d6a2e2d8ca5-client-ca\") pod \"10e50c97-9956-48fc-a759-6d6a2e2d8ca5\" (UID: \"10e50c97-9956-48fc-a759-6d6a2e2d8ca5\") " Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.030140 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9t85s\" (UniqueName: \"kubernetes.io/projected/10e50c97-9956-48fc-a759-6d6a2e2d8ca5-kube-api-access-9t85s\") pod \"10e50c97-9956-48fc-a759-6d6a2e2d8ca5\" (UID: \"10e50c97-9956-48fc-a759-6d6a2e2d8ca5\") " Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.030198 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10e50c97-9956-48fc-a759-6d6a2e2d8ca5-config\") pod \"10e50c97-9956-48fc-a759-6d6a2e2d8ca5\" (UID: \"10e50c97-9956-48fc-a759-6d6a2e2d8ca5\") " Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.030241 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10e50c97-9956-48fc-a759-6d6a2e2d8ca5-serving-cert\") pod \"10e50c97-9956-48fc-a759-6d6a2e2d8ca5\" (UID: \"10e50c97-9956-48fc-a759-6d6a2e2d8ca5\") " Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.029592 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/10e50c97-9956-48fc-a759-6d6a2e2d8ca5-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "10e50c97-9956-48fc-a759-6d6a2e2d8ca5" (UID: "10e50c97-9956-48fc-a759-6d6a2e2d8ca5"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.030917 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10e50c97-9956-48fc-a759-6d6a2e2d8ca5-client-ca" (OuterVolumeSpecName: "client-ca") pod "10e50c97-9956-48fc-a759-6d6a2e2d8ca5" (UID: "10e50c97-9956-48fc-a759-6d6a2e2d8ca5"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.031049 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/10e50c97-9956-48fc-a759-6d6a2e2d8ca5-config" (OuterVolumeSpecName: "config") pod "10e50c97-9956-48fc-a759-6d6a2e2d8ca5" (UID: "10e50c97-9956-48fc-a759-6d6a2e2d8ca5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.035443 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10e50c97-9956-48fc-a759-6d6a2e2d8ca5-kube-api-access-9t85s" (OuterVolumeSpecName: "kube-api-access-9t85s") pod "10e50c97-9956-48fc-a759-6d6a2e2d8ca5" (UID: "10e50c97-9956-48fc-a759-6d6a2e2d8ca5"). InnerVolumeSpecName "kube-api-access-9t85s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.037725 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/10e50c97-9956-48fc-a759-6d6a2e2d8ca5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "10e50c97-9956-48fc-a759-6d6a2e2d8ca5" (UID: "10e50c97-9956-48fc-a759-6d6a2e2d8ca5"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.132470 4751 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/10e50c97-9956-48fc-a759-6d6a2e2d8ca5-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.132505 4751 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/10e50c97-9956-48fc-a759-6d6a2e2d8ca5-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.132517 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9t85s\" (UniqueName: \"kubernetes.io/projected/10e50c97-9956-48fc-a759-6d6a2e2d8ca5-kube-api-access-9t85s\") on node \"crc\" DevicePath \"\"" Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.132531 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/10e50c97-9956-48fc-a759-6d6a2e2d8ca5-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.132540 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/10e50c97-9956-48fc-a759-6d6a2e2d8ca5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.177585 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-69f964bddc-kkkr7"] Jan 31 14:46:25 crc kubenswrapper[4751]: E0131 14:46:25.178050 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e51033f6-0061-4b08-9d82-11c610c7d396" containerName="route-controller-manager" Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.178113 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e51033f6-0061-4b08-9d82-11c610c7d396" containerName="route-controller-manager" Jan 31 14:46:25 crc 
kubenswrapper[4751]: E0131 14:46:25.178155 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10e50c97-9956-48fc-a759-6d6a2e2d8ca5" containerName="controller-manager" Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.178174 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="10e50c97-9956-48fc-a759-6d6a2e2d8ca5" containerName="controller-manager" Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.178359 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e51033f6-0061-4b08-9d82-11c610c7d396" containerName="route-controller-manager" Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.178392 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="10e50c97-9956-48fc-a759-6d6a2e2d8ca5" containerName="controller-manager" Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.179252 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69f964bddc-kkkr7" Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.184119 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-786458fd97-jr5xx"] Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.185298 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-786458fd97-jr5xx" Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.189713 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-786458fd97-jr5xx"] Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.196590 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69f964bddc-kkkr7"] Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.210950 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7df5c4f8d-6z7qk" event={"ID":"10e50c97-9956-48fc-a759-6d6a2e2d8ca5","Type":"ContainerDied","Data":"f5eab53aa57319123515f3ce0a1dc6f4bc60f14152bae2520bba9ec245f1d592"} Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.211001 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7df5c4f8d-6z7qk" Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.211038 4751 scope.go:117] "RemoveContainer" containerID="ddc997e5bd0f4e42afe9a829495321c7b00002150e75b55e2c4d433cd4092402" Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.215335 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-77b9dd5444-kpj8z" event={"ID":"e51033f6-0061-4b08-9d82-11c610c7d396","Type":"ContainerDied","Data":"701b25b8e3fbaec6025474cc0863bcdd565567a075d8cac932f6692b1bdc32fa"} Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.215402 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-77b9dd5444-kpj8z"
Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.232374 4751 scope.go:117] "RemoveContainer" containerID="d3805c33079a37613edf0ea51929b4cc19479078ccdac739485e2af4ae10c78a"
Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.233291 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a69f6b0-3803-4184-bfd0-0fac841243c9-proxy-ca-bundles\") pod \"controller-manager-69f964bddc-kkkr7\" (UID: \"4a69f6b0-3803-4184-bfd0-0fac841243c9\") " pod="openshift-controller-manager/controller-manager-69f964bddc-kkkr7"
Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.233342 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63f706f0-4681-4255-9479-fa83f336faf3-config\") pod \"route-controller-manager-786458fd97-jr5xx\" (UID: \"63f706f0-4681-4255-9479-fa83f336faf3\") " pod="openshift-route-controller-manager/route-controller-manager-786458fd97-jr5xx"
Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.233383 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqtdt\" (UniqueName: \"kubernetes.io/projected/63f706f0-4681-4255-9479-fa83f336faf3-kube-api-access-fqtdt\") pod \"route-controller-manager-786458fd97-jr5xx\" (UID: \"63f706f0-4681-4255-9479-fa83f336faf3\") " pod="openshift-route-controller-manager/route-controller-manager-786458fd97-jr5xx"
Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.233414 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a69f6b0-3803-4184-bfd0-0fac841243c9-client-ca\") pod \"controller-manager-69f964bddc-kkkr7\" (UID: \"4a69f6b0-3803-4184-bfd0-0fac841243c9\") " pod="openshift-controller-manager/controller-manager-69f964bddc-kkkr7"
Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.233541 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63f706f0-4681-4255-9479-fa83f336faf3-client-ca\") pod \"route-controller-manager-786458fd97-jr5xx\" (UID: \"63f706f0-4681-4255-9479-fa83f336faf3\") " pod="openshift-route-controller-manager/route-controller-manager-786458fd97-jr5xx"
Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.233567 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-598sf\" (UniqueName: \"kubernetes.io/projected/4a69f6b0-3803-4184-bfd0-0fac841243c9-kube-api-access-598sf\") pod \"controller-manager-69f964bddc-kkkr7\" (UID: \"4a69f6b0-3803-4184-bfd0-0fac841243c9\") " pod="openshift-controller-manager/controller-manager-69f964bddc-kkkr7"
Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.233597 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a69f6b0-3803-4184-bfd0-0fac841243c9-config\") pod \"controller-manager-69f964bddc-kkkr7\" (UID: \"4a69f6b0-3803-4184-bfd0-0fac841243c9\") " pod="openshift-controller-manager/controller-manager-69f964bddc-kkkr7"
Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.233613 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a69f6b0-3803-4184-bfd0-0fac841243c9-serving-cert\") pod \"controller-manager-69f964bddc-kkkr7\" (UID: \"4a69f6b0-3803-4184-bfd0-0fac841243c9\") " pod="openshift-controller-manager/controller-manager-69f964bddc-kkkr7"
Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.233648 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63f706f0-4681-4255-9479-fa83f336faf3-serving-cert\") pod \"route-controller-manager-786458fd97-jr5xx\" (UID: \"63f706f0-4681-4255-9479-fa83f336faf3\") " pod="openshift-route-controller-manager/route-controller-manager-786458fd97-jr5xx"
Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.253993 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7df5c4f8d-6z7qk"]
Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.258455 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7df5c4f8d-6z7qk"]
Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.276767 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77b9dd5444-kpj8z"]
Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.280742 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-77b9dd5444-kpj8z"]
Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.334984 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63f706f0-4681-4255-9479-fa83f336faf3-serving-cert\") pod \"route-controller-manager-786458fd97-jr5xx\" (UID: \"63f706f0-4681-4255-9479-fa83f336faf3\") " pod="openshift-route-controller-manager/route-controller-manager-786458fd97-jr5xx"
Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.335052 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a69f6b0-3803-4184-bfd0-0fac841243c9-proxy-ca-bundles\") pod \"controller-manager-69f964bddc-kkkr7\" (UID: \"4a69f6b0-3803-4184-bfd0-0fac841243c9\") " pod="openshift-controller-manager/controller-manager-69f964bddc-kkkr7"
Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.335098 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63f706f0-4681-4255-9479-fa83f336faf3-config\") pod \"route-controller-manager-786458fd97-jr5xx\" (UID: \"63f706f0-4681-4255-9479-fa83f336faf3\") " pod="openshift-route-controller-manager/route-controller-manager-786458fd97-jr5xx"
Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.335143 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqtdt\" (UniqueName: \"kubernetes.io/projected/63f706f0-4681-4255-9479-fa83f336faf3-kube-api-access-fqtdt\") pod \"route-controller-manager-786458fd97-jr5xx\" (UID: \"63f706f0-4681-4255-9479-fa83f336faf3\") " pod="openshift-route-controller-manager/route-controller-manager-786458fd97-jr5xx"
Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.335183 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a69f6b0-3803-4184-bfd0-0fac841243c9-client-ca\") pod \"controller-manager-69f964bddc-kkkr7\" (UID: \"4a69f6b0-3803-4184-bfd0-0fac841243c9\") " pod="openshift-controller-manager/controller-manager-69f964bddc-kkkr7"
Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.335206 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63f706f0-4681-4255-9479-fa83f336faf3-client-ca\") pod \"route-controller-manager-786458fd97-jr5xx\" (UID: \"63f706f0-4681-4255-9479-fa83f336faf3\") " pod="openshift-route-controller-manager/route-controller-manager-786458fd97-jr5xx"
Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.335227 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-598sf\" (UniqueName: \"kubernetes.io/projected/4a69f6b0-3803-4184-bfd0-0fac841243c9-kube-api-access-598sf\") pod \"controller-manager-69f964bddc-kkkr7\" (UID: \"4a69f6b0-3803-4184-bfd0-0fac841243c9\") " pod="openshift-controller-manager/controller-manager-69f964bddc-kkkr7"
Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.335262 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a69f6b0-3803-4184-bfd0-0fac841243c9-config\") pod \"controller-manager-69f964bddc-kkkr7\" (UID: \"4a69f6b0-3803-4184-bfd0-0fac841243c9\") " pod="openshift-controller-manager/controller-manager-69f964bddc-kkkr7"
Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.335281 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a69f6b0-3803-4184-bfd0-0fac841243c9-serving-cert\") pod \"controller-manager-69f964bddc-kkkr7\" (UID: \"4a69f6b0-3803-4184-bfd0-0fac841243c9\") " pod="openshift-controller-manager/controller-manager-69f964bddc-kkkr7"
Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.336350 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a69f6b0-3803-4184-bfd0-0fac841243c9-proxy-ca-bundles\") pod \"controller-manager-69f964bddc-kkkr7\" (UID: \"4a69f6b0-3803-4184-bfd0-0fac841243c9\") " pod="openshift-controller-manager/controller-manager-69f964bddc-kkkr7"
Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.336811 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a69f6b0-3803-4184-bfd0-0fac841243c9-client-ca\") pod \"controller-manager-69f964bddc-kkkr7\" (UID: \"4a69f6b0-3803-4184-bfd0-0fac841243c9\") " pod="openshift-controller-manager/controller-manager-69f964bddc-kkkr7"
Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.337034 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63f706f0-4681-4255-9479-fa83f336faf3-client-ca\") pod \"route-controller-manager-786458fd97-jr5xx\" (UID: \"63f706f0-4681-4255-9479-fa83f336faf3\") " pod="openshift-route-controller-manager/route-controller-manager-786458fd97-jr5xx"
Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.338529 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a69f6b0-3803-4184-bfd0-0fac841243c9-serving-cert\") pod \"controller-manager-69f964bddc-kkkr7\" (UID: \"4a69f6b0-3803-4184-bfd0-0fac841243c9\") " pod="openshift-controller-manager/controller-manager-69f964bddc-kkkr7"
Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.338918 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63f706f0-4681-4255-9479-fa83f336faf3-serving-cert\") pod \"route-controller-manager-786458fd97-jr5xx\" (UID: \"63f706f0-4681-4255-9479-fa83f336faf3\") " pod="openshift-route-controller-manager/route-controller-manager-786458fd97-jr5xx"
Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.339239 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a69f6b0-3803-4184-bfd0-0fac841243c9-config\") pod \"controller-manager-69f964bddc-kkkr7\" (UID: \"4a69f6b0-3803-4184-bfd0-0fac841243c9\") " pod="openshift-controller-manager/controller-manager-69f964bddc-kkkr7"
Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.350684 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63f706f0-4681-4255-9479-fa83f336faf3-config\") pod \"route-controller-manager-786458fd97-jr5xx\" (UID: \"63f706f0-4681-4255-9479-fa83f336faf3\") " pod="openshift-route-controller-manager/route-controller-manager-786458fd97-jr5xx"
Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.354752 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-598sf\" (UniqueName: \"kubernetes.io/projected/4a69f6b0-3803-4184-bfd0-0fac841243c9-kube-api-access-598sf\") pod \"controller-manager-69f964bddc-kkkr7\" (UID: \"4a69f6b0-3803-4184-bfd0-0fac841243c9\") " pod="openshift-controller-manager/controller-manager-69f964bddc-kkkr7"
Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.354949 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqtdt\" (UniqueName: \"kubernetes.io/projected/63f706f0-4681-4255-9479-fa83f336faf3-kube-api-access-fqtdt\") pod \"route-controller-manager-786458fd97-jr5xx\" (UID: \"63f706f0-4681-4255-9479-fa83f336faf3\") " pod="openshift-route-controller-manager/route-controller-manager-786458fd97-jr5xx"
Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.505752 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69f964bddc-kkkr7"
Jan 31 14:46:25 crc kubenswrapper[4751]: I0131 14:46:25.528444 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-786458fd97-jr5xx"
Jan 31 14:46:26 crc kubenswrapper[4751]: I0131 14:46:26.254671 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69f964bddc-kkkr7"]
Jan 31 14:46:26 crc kubenswrapper[4751]: I0131 14:46:26.265768 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-786458fd97-jr5xx"]
Jan 31 14:46:26 crc kubenswrapper[4751]: W0131 14:46:26.282015 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63f706f0_4681_4255_9479_fa83f336faf3.slice/crio-f1b03d6b6b0f49ab50bcdb3528cc23a3e1574218292e25557a87fa9bff9d14bc WatchSource:0}: Error finding container f1b03d6b6b0f49ab50bcdb3528cc23a3e1574218292e25557a87fa9bff9d14bc: Status 404 returned error can't find the container with id f1b03d6b6b0f49ab50bcdb3528cc23a3e1574218292e25557a87fa9bff9d14bc
Jan 31 14:46:26 crc kubenswrapper[4751]: I0131 14:46:26.429671 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10e50c97-9956-48fc-a759-6d6a2e2d8ca5" path="/var/lib/kubelet/pods/10e50c97-9956-48fc-a759-6d6a2e2d8ca5/volumes"
Jan 31 14:46:26 crc kubenswrapper[4751]: I0131 14:46:26.432116 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e51033f6-0061-4b08-9d82-11c610c7d396" path="/var/lib/kubelet/pods/e51033f6-0061-4b08-9d82-11c610c7d396/volumes"
Jan 31 14:46:27 crc kubenswrapper[4751]: I0131 14:46:27.234663 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69f964bddc-kkkr7" event={"ID":"4a69f6b0-3803-4184-bfd0-0fac841243c9","Type":"ContainerStarted","Data":"fe82122738b743a68b1378a4b01c84a2b7160746fd4157fd800d3764207bde57"}
Jan 31 14:46:27 crc kubenswrapper[4751]: I0131 14:46:27.234707 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69f964bddc-kkkr7" event={"ID":"4a69f6b0-3803-4184-bfd0-0fac841243c9","Type":"ContainerStarted","Data":"86d14448a4ea80ec6af8481d7e6007a568a5184b3838fc04a9ee1ffb1652ee65"}
Jan 31 14:46:27 crc kubenswrapper[4751]: I0131 14:46:27.236106 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-786458fd97-jr5xx" event={"ID":"63f706f0-4681-4255-9479-fa83f336faf3","Type":"ContainerStarted","Data":"2f228eb3d0a527e09dcf6548c6df7eca5ca8b08cfdbf6688de33c64dc5398abe"}
Jan 31 14:46:27 crc kubenswrapper[4751]: I0131 14:46:27.236139 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-786458fd97-jr5xx" event={"ID":"63f706f0-4681-4255-9479-fa83f336faf3","Type":"ContainerStarted","Data":"f1b03d6b6b0f49ab50bcdb3528cc23a3e1574218292e25557a87fa9bff9d14bc"}
Jan 31 14:46:27 crc kubenswrapper[4751]: I0131 14:46:27.236493 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-786458fd97-jr5xx"
Jan 31 14:46:27 crc kubenswrapper[4751]: I0131 14:46:27.259065 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-69f964bddc-kkkr7" podStartSLOduration=4.25904459 podStartE2EDuration="4.25904459s" podCreationTimestamp="2026-01-31 14:46:23 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:46:27.257290664 +0000 UTC m=+291.632003559" watchObservedRunningTime="2026-01-31 14:46:27.25904459 +0000 UTC m=+291.633757475"
Jan 31 14:46:27 crc kubenswrapper[4751]: I0131 14:46:27.287893 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-786458fd97-jr5xx" podStartSLOduration=3.287873358 podStartE2EDuration="3.287873358s" podCreationTimestamp="2026-01-31 14:46:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:46:27.287555909 +0000 UTC m=+291.662268794" watchObservedRunningTime="2026-01-31 14:46:27.287873358 +0000 UTC m=+291.662586243"
Jan 31 14:46:27 crc kubenswrapper[4751]: I0131 14:46:27.708874 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-786458fd97-jr5xx"
Jan 31 14:46:28 crc kubenswrapper[4751]: I0131 14:46:28.242009 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-69f964bddc-kkkr7"
Jan 31 14:46:28 crc kubenswrapper[4751]: I0131 14:46:28.248313 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-69f964bddc-kkkr7"
Jan 31 14:46:36 crc kubenswrapper[4751]: I0131 14:46:36.180523 4751 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials
Jan 31 14:46:43 crc kubenswrapper[4751]: I0131 14:46:43.911846 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-69f964bddc-kkkr7"]
Jan 31 14:46:43 crc kubenswrapper[4751]: I0131 14:46:43.912799 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-69f964bddc-kkkr7" podUID="4a69f6b0-3803-4184-bfd0-0fac841243c9" containerName="controller-manager" containerID="cri-o://fe82122738b743a68b1378a4b01c84a2b7160746fd4157fd800d3764207bde57" gracePeriod=30
Jan 31 14:46:43 crc kubenswrapper[4751]: I0131 14:46:43.939851 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-786458fd97-jr5xx"]
Jan 31 14:46:43 crc kubenswrapper[4751]: I0131 14:46:43.940548 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-786458fd97-jr5xx" podUID="63f706f0-4681-4255-9479-fa83f336faf3" containerName="route-controller-manager" containerID="cri-o://2f228eb3d0a527e09dcf6548c6df7eca5ca8b08cfdbf6688de33c64dc5398abe" gracePeriod=30
Jan 31 14:46:44 crc kubenswrapper[4751]: I0131 14:46:44.351709 4751 generic.go:334] "Generic (PLEG): container finished" podID="4a69f6b0-3803-4184-bfd0-0fac841243c9" containerID="fe82122738b743a68b1378a4b01c84a2b7160746fd4157fd800d3764207bde57" exitCode=0
Jan 31 14:46:44 crc kubenswrapper[4751]: I0131 14:46:44.351819 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69f964bddc-kkkr7" event={"ID":"4a69f6b0-3803-4184-bfd0-0fac841243c9","Type":"ContainerDied","Data":"fe82122738b743a68b1378a4b01c84a2b7160746fd4157fd800d3764207bde57"}
Jan 31 14:46:44 crc kubenswrapper[4751]: I0131 14:46:44.354212 4751 generic.go:334] "Generic (PLEG): container finished" podID="63f706f0-4681-4255-9479-fa83f336faf3" containerID="2f228eb3d0a527e09dcf6548c6df7eca5ca8b08cfdbf6688de33c64dc5398abe" exitCode=0
Jan 31 14:46:44 crc kubenswrapper[4751]: I0131 14:46:44.354260 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-786458fd97-jr5xx" event={"ID":"63f706f0-4681-4255-9479-fa83f336faf3","Type":"ContainerDied","Data":"2f228eb3d0a527e09dcf6548c6df7eca5ca8b08cfdbf6688de33c64dc5398abe"}
Jan 31 14:46:44 crc kubenswrapper[4751]: I0131 14:46:44.540968 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69f964bddc-kkkr7"
Jan 31 14:46:44 crc kubenswrapper[4751]: I0131 14:46:44.547172 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-786458fd97-jr5xx"
Jan 31 14:46:44 crc kubenswrapper[4751]: I0131 14:46:44.696414 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a69f6b0-3803-4184-bfd0-0fac841243c9-config\") pod \"4a69f6b0-3803-4184-bfd0-0fac841243c9\" (UID: \"4a69f6b0-3803-4184-bfd0-0fac841243c9\") "
Jan 31 14:46:44 crc kubenswrapper[4751]: I0131 14:46:44.696523 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63f706f0-4681-4255-9479-fa83f336faf3-serving-cert\") pod \"63f706f0-4681-4255-9479-fa83f336faf3\" (UID: \"63f706f0-4681-4255-9479-fa83f336faf3\") "
Jan 31 14:46:44 crc kubenswrapper[4751]: I0131 14:46:44.696629 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63f706f0-4681-4255-9479-fa83f336faf3-config\") pod \"63f706f0-4681-4255-9479-fa83f336faf3\" (UID: \"63f706f0-4681-4255-9479-fa83f336faf3\") "
Jan 31 14:46:44 crc kubenswrapper[4751]: I0131 14:46:44.696677 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a69f6b0-3803-4184-bfd0-0fac841243c9-proxy-ca-bundles\") pod \"4a69f6b0-3803-4184-bfd0-0fac841243c9\" (UID: \"4a69f6b0-3803-4184-bfd0-0fac841243c9\") "
Jan 31 14:46:44 crc kubenswrapper[4751]: I0131 14:46:44.696716 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63f706f0-4681-4255-9479-fa83f336faf3-client-ca\") pod \"63f706f0-4681-4255-9479-fa83f336faf3\" (UID: \"63f706f0-4681-4255-9479-fa83f336faf3\") "
Jan 31 14:46:44 crc kubenswrapper[4751]: I0131 14:46:44.696752 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a69f6b0-3803-4184-bfd0-0fac841243c9-client-ca\") pod \"4a69f6b0-3803-4184-bfd0-0fac841243c9\" (UID: \"4a69f6b0-3803-4184-bfd0-0fac841243c9\") "
Jan 31 14:46:44 crc kubenswrapper[4751]: I0131 14:46:44.696790 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqtdt\" (UniqueName: \"kubernetes.io/projected/63f706f0-4681-4255-9479-fa83f336faf3-kube-api-access-fqtdt\") pod \"63f706f0-4681-4255-9479-fa83f336faf3\" (UID: \"63f706f0-4681-4255-9479-fa83f336faf3\") "
Jan 31 14:46:44 crc kubenswrapper[4751]: I0131 14:46:44.696829 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a69f6b0-3803-4184-bfd0-0fac841243c9-serving-cert\") pod \"4a69f6b0-3803-4184-bfd0-0fac841243c9\" (UID: \"4a69f6b0-3803-4184-bfd0-0fac841243c9\") "
Jan 31 14:46:44 crc kubenswrapper[4751]: I0131 14:46:44.696857 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-598sf\" (UniqueName: \"kubernetes.io/projected/4a69f6b0-3803-4184-bfd0-0fac841243c9-kube-api-access-598sf\") pod \"4a69f6b0-3803-4184-bfd0-0fac841243c9\" (UID: \"4a69f6b0-3803-4184-bfd0-0fac841243c9\") "
Jan 31 14:46:44 crc kubenswrapper[4751]: I0131 14:46:44.697630 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a69f6b0-3803-4184-bfd0-0fac841243c9-client-ca" (OuterVolumeSpecName: "client-ca") pod "4a69f6b0-3803-4184-bfd0-0fac841243c9" (UID: "4a69f6b0-3803-4184-bfd0-0fac841243c9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 14:46:44 crc kubenswrapper[4751]: I0131 14:46:44.698046 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a69f6b0-3803-4184-bfd0-0fac841243c9-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "4a69f6b0-3803-4184-bfd0-0fac841243c9" (UID: "4a69f6b0-3803-4184-bfd0-0fac841243c9"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 14:46:44 crc kubenswrapper[4751]: I0131 14:46:44.698141 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4a69f6b0-3803-4184-bfd0-0fac841243c9-config" (OuterVolumeSpecName: "config") pod "4a69f6b0-3803-4184-bfd0-0fac841243c9" (UID: "4a69f6b0-3803-4184-bfd0-0fac841243c9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 14:46:44 crc kubenswrapper[4751]: I0131 14:46:44.698267 4751 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/4a69f6b0-3803-4184-bfd0-0fac841243c9-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 31 14:46:44 crc kubenswrapper[4751]: I0131 14:46:44.698378 4751 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/4a69f6b0-3803-4184-bfd0-0fac841243c9-client-ca\") on node \"crc\" DevicePath \"\""
Jan 31 14:46:44 crc kubenswrapper[4751]: I0131 14:46:44.698562 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63f706f0-4681-4255-9479-fa83f336faf3-client-ca" (OuterVolumeSpecName: "client-ca") pod "63f706f0-4681-4255-9479-fa83f336faf3" (UID: "63f706f0-4681-4255-9479-fa83f336faf3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 14:46:44 crc kubenswrapper[4751]: I0131 14:46:44.698637 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/63f706f0-4681-4255-9479-fa83f336faf3-config" (OuterVolumeSpecName: "config") pod "63f706f0-4681-4255-9479-fa83f336faf3" (UID: "63f706f0-4681-4255-9479-fa83f336faf3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 14:46:44 crc kubenswrapper[4751]: I0131 14:46:44.703163 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4a69f6b0-3803-4184-bfd0-0fac841243c9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "4a69f6b0-3803-4184-bfd0-0fac841243c9" (UID: "4a69f6b0-3803-4184-bfd0-0fac841243c9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 14:46:44 crc kubenswrapper[4751]: I0131 14:46:44.703270 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4a69f6b0-3803-4184-bfd0-0fac841243c9-kube-api-access-598sf" (OuterVolumeSpecName: "kube-api-access-598sf") pod "4a69f6b0-3803-4184-bfd0-0fac841243c9" (UID: "4a69f6b0-3803-4184-bfd0-0fac841243c9"). InnerVolumeSpecName "kube-api-access-598sf". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 14:46:44 crc kubenswrapper[4751]: I0131 14:46:44.703908 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63f706f0-4681-4255-9479-fa83f336faf3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "63f706f0-4681-4255-9479-fa83f336faf3" (UID: "63f706f0-4681-4255-9479-fa83f336faf3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 14:46:44 crc kubenswrapper[4751]: I0131 14:46:44.704351 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63f706f0-4681-4255-9479-fa83f336faf3-kube-api-access-fqtdt" (OuterVolumeSpecName: "kube-api-access-fqtdt") pod "63f706f0-4681-4255-9479-fa83f336faf3" (UID: "63f706f0-4681-4255-9479-fa83f336faf3"). InnerVolumeSpecName "kube-api-access-fqtdt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 14:46:44 crc kubenswrapper[4751]: I0131 14:46:44.799796 4751 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/63f706f0-4681-4255-9479-fa83f336faf3-client-ca\") on node \"crc\" DevicePath \"\""
Jan 31 14:46:44 crc kubenswrapper[4751]: I0131 14:46:44.799853 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqtdt\" (UniqueName: \"kubernetes.io/projected/63f706f0-4681-4255-9479-fa83f336faf3-kube-api-access-fqtdt\") on node \"crc\" DevicePath \"\""
Jan 31 14:46:44 crc kubenswrapper[4751]: I0131 14:46:44.799878 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4a69f6b0-3803-4184-bfd0-0fac841243c9-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 14:46:44 crc kubenswrapper[4751]: I0131 14:46:44.799901 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-598sf\" (UniqueName: \"kubernetes.io/projected/4a69f6b0-3803-4184-bfd0-0fac841243c9-kube-api-access-598sf\") on node \"crc\" DevicePath \"\""
Jan 31 14:46:44 crc kubenswrapper[4751]: I0131 14:46:44.799949 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/4a69f6b0-3803-4184-bfd0-0fac841243c9-config\") on node \"crc\" DevicePath \"\""
Jan 31 14:46:44 crc kubenswrapper[4751]: I0131 14:46:44.799974 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/63f706f0-4681-4255-9479-fa83f336faf3-serving-cert\") on node \"crc\" DevicePath \"\""
Jan 31 14:46:44 crc kubenswrapper[4751]: I0131 14:46:44.799992 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/63f706f0-4681-4255-9479-fa83f336faf3-config\") on node \"crc\" DevicePath \"\""
Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.190722 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ff7586b44-p7vhp"]
Jan 31 14:46:45 crc kubenswrapper[4751]: E0131 14:46:45.191287 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4a69f6b0-3803-4184-bfd0-0fac841243c9" containerName="controller-manager"
Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.191318 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4a69f6b0-3803-4184-bfd0-0fac841243c9" containerName="controller-manager"
Jan 31 14:46:45 crc kubenswrapper[4751]: E0131 14:46:45.191351 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63f706f0-4681-4255-9479-fa83f336faf3" containerName="route-controller-manager"
Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.191364 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="63f706f0-4681-4255-9479-fa83f336faf3" containerName="route-controller-manager"
Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.191533 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="4a69f6b0-3803-4184-bfd0-0fac841243c9" containerName="controller-manager"
Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.191570 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="63f706f0-4681-4255-9479-fa83f336faf3" containerName="route-controller-manager"
Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.192216 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p7vhp"
Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.213156 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-77bc486b6-z2pvg"]
Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.215733 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77bc486b6-z2pvg"
Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.232329 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77bc486b6-z2pvg"]
Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.247242 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ff7586b44-p7vhp"]
Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.308201 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/886a47b7-6715-4cd7-aea5-7db85b593b9b-config\") pod \"controller-manager-77bc486b6-z2pvg\" (UID: \"886a47b7-6715-4cd7-aea5-7db85b593b9b\") " pod="openshift-controller-manager/controller-manager-77bc486b6-z2pvg"
Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.308266 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61fbf77a-1344-4a32-81b4-9a12283ace53-client-ca\") pod \"route-controller-manager-7ff7586b44-p7vhp\" (UID: \"61fbf77a-1344-4a32-81b4-9a12283ace53\") " pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p7vhp"
Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.308457 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/886a47b7-6715-4cd7-aea5-7db85b593b9b-proxy-ca-bundles\") pod \"controller-manager-77bc486b6-z2pvg\" (UID: \"886a47b7-6715-4cd7-aea5-7db85b593b9b\") " pod="openshift-controller-manager/controller-manager-77bc486b6-z2pvg"
Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.308528 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61fbf77a-1344-4a32-81b4-9a12283ace53-serving-cert\") pod \"route-controller-manager-7ff7586b44-p7vhp\" (UID: \"61fbf77a-1344-4a32-81b4-9a12283ace53\") " pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p7vhp"
Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.308545 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjqp6\" (UniqueName: \"kubernetes.io/projected/61fbf77a-1344-4a32-81b4-9a12283ace53-kube-api-access-pjqp6\") pod \"route-controller-manager-7ff7586b44-p7vhp\" (UID: \"61fbf77a-1344-4a32-81b4-9a12283ace53\") " pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p7vhp"
Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.308686 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/886a47b7-6715-4cd7-aea5-7db85b593b9b-serving-cert\") pod \"controller-manager-77bc486b6-z2pvg\" (UID: \"886a47b7-6715-4cd7-aea5-7db85b593b9b\") " pod="openshift-controller-manager/controller-manager-77bc486b6-z2pvg"
Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.308768 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sbxv\" (UniqueName: \"kubernetes.io/projected/886a47b7-6715-4cd7-aea5-7db85b593b9b-kube-api-access-6sbxv\") pod \"controller-manager-77bc486b6-z2pvg\" (UID: \"886a47b7-6715-4cd7-aea5-7db85b593b9b\") " pod="openshift-controller-manager/controller-manager-77bc486b6-z2pvg"
Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.308796 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61fbf77a-1344-4a32-81b4-9a12283ace53-config\") pod \"route-controller-manager-7ff7586b44-p7vhp\" (UID: \"61fbf77a-1344-4a32-81b4-9a12283ace53\") " pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p7vhp"
Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.308839 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/886a47b7-6715-4cd7-aea5-7db85b593b9b-client-ca\") pod \"controller-manager-77bc486b6-z2pvg\" (UID: \"886a47b7-6715-4cd7-aea5-7db85b593b9b\") " pod="openshift-controller-manager/controller-manager-77bc486b6-z2pvg"
Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.361853 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-786458fd97-jr5xx" event={"ID":"63f706f0-4681-4255-9479-fa83f336faf3","Type":"ContainerDied","Data":"f1b03d6b6b0f49ab50bcdb3528cc23a3e1574218292e25557a87fa9bff9d14bc"}
Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.361938 4751 scope.go:117] "RemoveContainer" containerID="2f228eb3d0a527e09dcf6548c6df7eca5ca8b08cfdbf6688de33c64dc5398abe"
Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.362051 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-786458fd97-jr5xx"
Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.367555 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69f964bddc-kkkr7" event={"ID":"4a69f6b0-3803-4184-bfd0-0fac841243c9","Type":"ContainerDied","Data":"86d14448a4ea80ec6af8481d7e6007a568a5184b3838fc04a9ee1ffb1652ee65"}
Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.367696 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69f964bddc-kkkr7"
Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.382704 4751 scope.go:117] "RemoveContainer" containerID="fe82122738b743a68b1378a4b01c84a2b7160746fd4157fd800d3764207bde57"
Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.400909 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-786458fd97-jr5xx"]
Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.406274 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-786458fd97-jr5xx"]
Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.409768 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/886a47b7-6715-4cd7-aea5-7db85b593b9b-proxy-ca-bundles\") pod \"controller-manager-77bc486b6-z2pvg\" (UID: \"886a47b7-6715-4cd7-aea5-7db85b593b9b\") " pod="openshift-controller-manager/controller-manager-77bc486b6-z2pvg"
Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.409837 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61fbf77a-1344-4a32-81b4-9a12283ace53-serving-cert\") pod \"route-controller-manager-7ff7586b44-p7vhp\" (UID: 
\"61fbf77a-1344-4a32-81b4-9a12283ace53\") " pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p7vhp" Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.409874 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjqp6\" (UniqueName: \"kubernetes.io/projected/61fbf77a-1344-4a32-81b4-9a12283ace53-kube-api-access-pjqp6\") pod \"route-controller-manager-7ff7586b44-p7vhp\" (UID: \"61fbf77a-1344-4a32-81b4-9a12283ace53\") " pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p7vhp" Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.409927 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/886a47b7-6715-4cd7-aea5-7db85b593b9b-serving-cert\") pod \"controller-manager-77bc486b6-z2pvg\" (UID: \"886a47b7-6715-4cd7-aea5-7db85b593b9b\") " pod="openshift-controller-manager/controller-manager-77bc486b6-z2pvg" Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.409972 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6sbxv\" (UniqueName: \"kubernetes.io/projected/886a47b7-6715-4cd7-aea5-7db85b593b9b-kube-api-access-6sbxv\") pod \"controller-manager-77bc486b6-z2pvg\" (UID: \"886a47b7-6715-4cd7-aea5-7db85b593b9b\") " pod="openshift-controller-manager/controller-manager-77bc486b6-z2pvg" Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.410005 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61fbf77a-1344-4a32-81b4-9a12283ace53-config\") pod \"route-controller-manager-7ff7586b44-p7vhp\" (UID: \"61fbf77a-1344-4a32-81b4-9a12283ace53\") " pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p7vhp" Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.410048 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"client-ca\" (UniqueName: \"kubernetes.io/configmap/886a47b7-6715-4cd7-aea5-7db85b593b9b-client-ca\") pod \"controller-manager-77bc486b6-z2pvg\" (UID: \"886a47b7-6715-4cd7-aea5-7db85b593b9b\") " pod="openshift-controller-manager/controller-manager-77bc486b6-z2pvg" Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.410128 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/886a47b7-6715-4cd7-aea5-7db85b593b9b-config\") pod \"controller-manager-77bc486b6-z2pvg\" (UID: \"886a47b7-6715-4cd7-aea5-7db85b593b9b\") " pod="openshift-controller-manager/controller-manager-77bc486b6-z2pvg" Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.410158 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61fbf77a-1344-4a32-81b4-9a12283ace53-client-ca\") pod \"route-controller-manager-7ff7586b44-p7vhp\" (UID: \"61fbf77a-1344-4a32-81b4-9a12283ace53\") " pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p7vhp" Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.411567 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61fbf77a-1344-4a32-81b4-9a12283ace53-client-ca\") pod \"route-controller-manager-7ff7586b44-p7vhp\" (UID: \"61fbf77a-1344-4a32-81b4-9a12283ace53\") " pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p7vhp" Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.413267 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/886a47b7-6715-4cd7-aea5-7db85b593b9b-proxy-ca-bundles\") pod \"controller-manager-77bc486b6-z2pvg\" (UID: \"886a47b7-6715-4cd7-aea5-7db85b593b9b\") " pod="openshift-controller-manager/controller-manager-77bc486b6-z2pvg" Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 
14:46:45.413779 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/886a47b7-6715-4cd7-aea5-7db85b593b9b-client-ca\") pod \"controller-manager-77bc486b6-z2pvg\" (UID: \"886a47b7-6715-4cd7-aea5-7db85b593b9b\") " pod="openshift-controller-manager/controller-manager-77bc486b6-z2pvg" Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.413882 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61fbf77a-1344-4a32-81b4-9a12283ace53-config\") pod \"route-controller-manager-7ff7586b44-p7vhp\" (UID: \"61fbf77a-1344-4a32-81b4-9a12283ace53\") " pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p7vhp" Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.416033 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/886a47b7-6715-4cd7-aea5-7db85b593b9b-config\") pod \"controller-manager-77bc486b6-z2pvg\" (UID: \"886a47b7-6715-4cd7-aea5-7db85b593b9b\") " pod="openshift-controller-manager/controller-manager-77bc486b6-z2pvg" Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.417760 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61fbf77a-1344-4a32-81b4-9a12283ace53-serving-cert\") pod \"route-controller-manager-7ff7586b44-p7vhp\" (UID: \"61fbf77a-1344-4a32-81b4-9a12283ace53\") " pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p7vhp" Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.421401 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-69f964bddc-kkkr7"] Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.430052 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-69f964bddc-kkkr7"] Jan 31 14:46:45 crc 
kubenswrapper[4751]: I0131 14:46:45.444004 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/886a47b7-6715-4cd7-aea5-7db85b593b9b-serving-cert\") pod \"controller-manager-77bc486b6-z2pvg\" (UID: \"886a47b7-6715-4cd7-aea5-7db85b593b9b\") " pod="openshift-controller-manager/controller-manager-77bc486b6-z2pvg" Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.445022 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sbxv\" (UniqueName: \"kubernetes.io/projected/886a47b7-6715-4cd7-aea5-7db85b593b9b-kube-api-access-6sbxv\") pod \"controller-manager-77bc486b6-z2pvg\" (UID: \"886a47b7-6715-4cd7-aea5-7db85b593b9b\") " pod="openshift-controller-manager/controller-manager-77bc486b6-z2pvg" Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.447496 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjqp6\" (UniqueName: \"kubernetes.io/projected/61fbf77a-1344-4a32-81b4-9a12283ace53-kube-api-access-pjqp6\") pod \"route-controller-manager-7ff7586b44-p7vhp\" (UID: \"61fbf77a-1344-4a32-81b4-9a12283ace53\") " pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p7vhp" Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.529817 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p7vhp" Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.542448 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77bc486b6-z2pvg" Jan 31 14:46:45 crc kubenswrapper[4751]: I0131 14:46:45.882172 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ff7586b44-p7vhp"] Jan 31 14:46:45 crc kubenswrapper[4751]: W0131 14:46:45.887257 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod61fbf77a_1344_4a32_81b4_9a12283ace53.slice/crio-ce28e111d2169a56020e396a7157657e7dded02c68bf85f9bd547a411d88e0d1 WatchSource:0}: Error finding container ce28e111d2169a56020e396a7157657e7dded02c68bf85f9bd547a411d88e0d1: Status 404 returned error can't find the container with id ce28e111d2169a56020e396a7157657e7dded02c68bf85f9bd547a411d88e0d1 Jan 31 14:46:46 crc kubenswrapper[4751]: I0131 14:46:46.035839 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77bc486b6-z2pvg"] Jan 31 14:46:46 crc kubenswrapper[4751]: W0131 14:46:46.043395 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod886a47b7_6715_4cd7_aea5_7db85b593b9b.slice/crio-1f155d2931dd972625ba384f618c3c395946a7bcfd8383aa5e80dfb2a1ba9412 WatchSource:0}: Error finding container 1f155d2931dd972625ba384f618c3c395946a7bcfd8383aa5e80dfb2a1ba9412: Status 404 returned error can't find the container with id 1f155d2931dd972625ba384f618c3c395946a7bcfd8383aa5e80dfb2a1ba9412 Jan 31 14:46:46 crc kubenswrapper[4751]: I0131 14:46:46.376161 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77bc486b6-z2pvg" event={"ID":"886a47b7-6715-4cd7-aea5-7db85b593b9b","Type":"ContainerStarted","Data":"4e67ee071bf6d27d2aacad78b8e2a9b1cad4b202c044eb94395ef3b85a36b3e5"} Jan 31 14:46:46 crc kubenswrapper[4751]: I0131 14:46:46.376989 4751 kubelet.go:2542] "SyncLoop 
(probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-77bc486b6-z2pvg" Jan 31 14:46:46 crc kubenswrapper[4751]: I0131 14:46:46.377273 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77bc486b6-z2pvg" event={"ID":"886a47b7-6715-4cd7-aea5-7db85b593b9b","Type":"ContainerStarted","Data":"1f155d2931dd972625ba384f618c3c395946a7bcfd8383aa5e80dfb2a1ba9412"} Jan 31 14:46:46 crc kubenswrapper[4751]: I0131 14:46:46.379380 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p7vhp" event={"ID":"61fbf77a-1344-4a32-81b4-9a12283ace53","Type":"ContainerStarted","Data":"0ddb810f9243354e528fff64b7098fda6c73e32066f11dc3cd5f6a8f0284bcdc"} Jan 31 14:46:46 crc kubenswrapper[4751]: I0131 14:46:46.379436 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p7vhp" event={"ID":"61fbf77a-1344-4a32-81b4-9a12283ace53","Type":"ContainerStarted","Data":"ce28e111d2169a56020e396a7157657e7dded02c68bf85f9bd547a411d88e0d1"} Jan 31 14:46:46 crc kubenswrapper[4751]: I0131 14:46:46.379793 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p7vhp" Jan 31 14:46:46 crc kubenswrapper[4751]: I0131 14:46:46.384666 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-77bc486b6-z2pvg" Jan 31 14:46:46 crc kubenswrapper[4751]: I0131 14:46:46.419783 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4a69f6b0-3803-4184-bfd0-0fac841243c9" path="/var/lib/kubelet/pods/4a69f6b0-3803-4184-bfd0-0fac841243c9/volumes" Jan 31 14:46:46 crc kubenswrapper[4751]: I0131 14:46:46.420619 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="63f706f0-4681-4255-9479-fa83f336faf3" path="/var/lib/kubelet/pods/63f706f0-4681-4255-9479-fa83f336faf3/volumes" Jan 31 14:46:46 crc kubenswrapper[4751]: I0131 14:46:46.434555 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-77bc486b6-z2pvg" podStartSLOduration=3.434536821 podStartE2EDuration="3.434536821s" podCreationTimestamp="2026-01-31 14:46:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:46:46.404405659 +0000 UTC m=+310.779118544" watchObservedRunningTime="2026-01-31 14:46:46.434536821 +0000 UTC m=+310.809249716" Jan 31 14:46:46 crc kubenswrapper[4751]: I0131 14:46:46.457535 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p7vhp" podStartSLOduration=3.457516245 podStartE2EDuration="3.457516245s" podCreationTimestamp="2026-01-31 14:46:43 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:46:46.453792577 +0000 UTC m=+310.828505462" watchObservedRunningTime="2026-01-31 14:46:46.457516245 +0000 UTC m=+310.832229130" Jan 31 14:46:46 crc kubenswrapper[4751]: I0131 14:46:46.501826 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p7vhp" Jan 31 14:46:48 crc kubenswrapper[4751]: I0131 14:46:48.815655 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 31 14:47:03 crc kubenswrapper[4751]: I0131 14:47:03.813646 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-5fdjn"] Jan 31 14:47:03 crc kubenswrapper[4751]: I0131 14:47:03.818507 4751 util.go:30] "No sandbox 
for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-5fdjn" Jan 31 14:47:03 crc kubenswrapper[4751]: I0131 14:47:03.821278 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-5fdjn"] Jan 31 14:47:03 crc kubenswrapper[4751]: I0131 14:47:03.926471 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77bc486b6-z2pvg"] Jan 31 14:47:03 crc kubenswrapper[4751]: I0131 14:47:03.926674 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-77bc486b6-z2pvg" podUID="886a47b7-6715-4cd7-aea5-7db85b593b9b" containerName="controller-manager" containerID="cri-o://4e67ee071bf6d27d2aacad78b8e2a9b1cad4b202c044eb94395ef3b85a36b3e5" gracePeriod=30 Jan 31 14:47:03 crc kubenswrapper[4751]: I0131 14:47:03.941025 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ff7586b44-p7vhp"] Jan 31 14:47:03 crc kubenswrapper[4751]: I0131 14:47:03.941264 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p7vhp" podUID="61fbf77a-1344-4a32-81b4-9a12283ace53" containerName="route-controller-manager" containerID="cri-o://0ddb810f9243354e528fff64b7098fda6c73e32066f11dc3cd5f6a8f0284bcdc" gracePeriod=30 Jan 31 14:47:03 crc kubenswrapper[4751]: I0131 14:47:03.959888 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e5d9ae7-378c-4f07-9d25-d1b3d187bde9-trusted-ca\") pod \"image-registry-66df7c8f76-5fdjn\" (UID: \"9e5d9ae7-378c-4f07-9d25-d1b3d187bde9\") " pod="openshift-image-registry/image-registry-66df7c8f76-5fdjn" Jan 31 14:47:03 crc kubenswrapper[4751]: I0131 14:47:03.959933 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e5d9ae7-378c-4f07-9d25-d1b3d187bde9-ca-trust-extracted\") pod \"image-registry-66df7c8f76-5fdjn\" (UID: \"9e5d9ae7-378c-4f07-9d25-d1b3d187bde9\") " pod="openshift-image-registry/image-registry-66df7c8f76-5fdjn" Jan 31 14:47:03 crc kubenswrapper[4751]: I0131 14:47:03.959955 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8m8k\" (UniqueName: \"kubernetes.io/projected/9e5d9ae7-378c-4f07-9d25-d1b3d187bde9-kube-api-access-z8m8k\") pod \"image-registry-66df7c8f76-5fdjn\" (UID: \"9e5d9ae7-378c-4f07-9d25-d1b3d187bde9\") " pod="openshift-image-registry/image-registry-66df7c8f76-5fdjn" Jan 31 14:47:03 crc kubenswrapper[4751]: I0131 14:47:03.959986 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-5fdjn\" (UID: \"9e5d9ae7-378c-4f07-9d25-d1b3d187bde9\") " pod="openshift-image-registry/image-registry-66df7c8f76-5fdjn" Jan 31 14:47:03 crc kubenswrapper[4751]: I0131 14:47:03.960083 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e5d9ae7-378c-4f07-9d25-d1b3d187bde9-registry-certificates\") pod \"image-registry-66df7c8f76-5fdjn\" (UID: \"9e5d9ae7-378c-4f07-9d25-d1b3d187bde9\") " pod="openshift-image-registry/image-registry-66df7c8f76-5fdjn" Jan 31 14:47:03 crc kubenswrapper[4751]: I0131 14:47:03.960135 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e5d9ae7-378c-4f07-9d25-d1b3d187bde9-registry-tls\") pod \"image-registry-66df7c8f76-5fdjn\" (UID: 
\"9e5d9ae7-378c-4f07-9d25-d1b3d187bde9\") " pod="openshift-image-registry/image-registry-66df7c8f76-5fdjn" Jan 31 14:47:03 crc kubenswrapper[4751]: I0131 14:47:03.960159 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e5d9ae7-378c-4f07-9d25-d1b3d187bde9-bound-sa-token\") pod \"image-registry-66df7c8f76-5fdjn\" (UID: \"9e5d9ae7-378c-4f07-9d25-d1b3d187bde9\") " pod="openshift-image-registry/image-registry-66df7c8f76-5fdjn" Jan 31 14:47:03 crc kubenswrapper[4751]: I0131 14:47:03.960225 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e5d9ae7-378c-4f07-9d25-d1b3d187bde9-installation-pull-secrets\") pod \"image-registry-66df7c8f76-5fdjn\" (UID: \"9e5d9ae7-378c-4f07-9d25-d1b3d187bde9\") " pod="openshift-image-registry/image-registry-66df7c8f76-5fdjn" Jan 31 14:47:03 crc kubenswrapper[4751]: I0131 14:47:03.982065 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-5fdjn\" (UID: \"9e5d9ae7-378c-4f07-9d25-d1b3d187bde9\") " pod="openshift-image-registry/image-registry-66df7c8f76-5fdjn" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.065920 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8m8k\" (UniqueName: \"kubernetes.io/projected/9e5d9ae7-378c-4f07-9d25-d1b3d187bde9-kube-api-access-z8m8k\") pod \"image-registry-66df7c8f76-5fdjn\" (UID: \"9e5d9ae7-378c-4f07-9d25-d1b3d187bde9\") " pod="openshift-image-registry/image-registry-66df7c8f76-5fdjn" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.065992 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e5d9ae7-378c-4f07-9d25-d1b3d187bde9-registry-certificates\") pod \"image-registry-66df7c8f76-5fdjn\" (UID: \"9e5d9ae7-378c-4f07-9d25-d1b3d187bde9\") " pod="openshift-image-registry/image-registry-66df7c8f76-5fdjn" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.066013 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e5d9ae7-378c-4f07-9d25-d1b3d187bde9-registry-tls\") pod \"image-registry-66df7c8f76-5fdjn\" (UID: \"9e5d9ae7-378c-4f07-9d25-d1b3d187bde9\") " pod="openshift-image-registry/image-registry-66df7c8f76-5fdjn" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.066044 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e5d9ae7-378c-4f07-9d25-d1b3d187bde9-bound-sa-token\") pod \"image-registry-66df7c8f76-5fdjn\" (UID: \"9e5d9ae7-378c-4f07-9d25-d1b3d187bde9\") " pod="openshift-image-registry/image-registry-66df7c8f76-5fdjn" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.066123 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e5d9ae7-378c-4f07-9d25-d1b3d187bde9-installation-pull-secrets\") pod \"image-registry-66df7c8f76-5fdjn\" (UID: \"9e5d9ae7-378c-4f07-9d25-d1b3d187bde9\") " pod="openshift-image-registry/image-registry-66df7c8f76-5fdjn" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.066187 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e5d9ae7-378c-4f07-9d25-d1b3d187bde9-trusted-ca\") pod \"image-registry-66df7c8f76-5fdjn\" (UID: \"9e5d9ae7-378c-4f07-9d25-d1b3d187bde9\") " pod="openshift-image-registry/image-registry-66df7c8f76-5fdjn" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.066208 4751 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e5d9ae7-378c-4f07-9d25-d1b3d187bde9-ca-trust-extracted\") pod \"image-registry-66df7c8f76-5fdjn\" (UID: \"9e5d9ae7-378c-4f07-9d25-d1b3d187bde9\") " pod="openshift-image-registry/image-registry-66df7c8f76-5fdjn" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.067676 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e5d9ae7-378c-4f07-9d25-d1b3d187bde9-registry-certificates\") pod \"image-registry-66df7c8f76-5fdjn\" (UID: \"9e5d9ae7-378c-4f07-9d25-d1b3d187bde9\") " pod="openshift-image-registry/image-registry-66df7c8f76-5fdjn" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.071136 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e5d9ae7-378c-4f07-9d25-d1b3d187bde9-trusted-ca\") pod \"image-registry-66df7c8f76-5fdjn\" (UID: \"9e5d9ae7-378c-4f07-9d25-d1b3d187bde9\") " pod="openshift-image-registry/image-registry-66df7c8f76-5fdjn" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.073869 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e5d9ae7-378c-4f07-9d25-d1b3d187bde9-ca-trust-extracted\") pod \"image-registry-66df7c8f76-5fdjn\" (UID: \"9e5d9ae7-378c-4f07-9d25-d1b3d187bde9\") " pod="openshift-image-registry/image-registry-66df7c8f76-5fdjn" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.074573 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e5d9ae7-378c-4f07-9d25-d1b3d187bde9-registry-tls\") pod \"image-registry-66df7c8f76-5fdjn\" (UID: \"9e5d9ae7-378c-4f07-9d25-d1b3d187bde9\") " pod="openshift-image-registry/image-registry-66df7c8f76-5fdjn" Jan 31 14:47:04 crc 
kubenswrapper[4751]: I0131 14:47:04.077190 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e5d9ae7-378c-4f07-9d25-d1b3d187bde9-installation-pull-secrets\") pod \"image-registry-66df7c8f76-5fdjn\" (UID: \"9e5d9ae7-378c-4f07-9d25-d1b3d187bde9\") " pod="openshift-image-registry/image-registry-66df7c8f76-5fdjn" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.088343 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8m8k\" (UniqueName: \"kubernetes.io/projected/9e5d9ae7-378c-4f07-9d25-d1b3d187bde9-kube-api-access-z8m8k\") pod \"image-registry-66df7c8f76-5fdjn\" (UID: \"9e5d9ae7-378c-4f07-9d25-d1b3d187bde9\") " pod="openshift-image-registry/image-registry-66df7c8f76-5fdjn" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.090471 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e5d9ae7-378c-4f07-9d25-d1b3d187bde9-bound-sa-token\") pod \"image-registry-66df7c8f76-5fdjn\" (UID: \"9e5d9ae7-378c-4f07-9d25-d1b3d187bde9\") " pod="openshift-image-registry/image-registry-66df7c8f76-5fdjn" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.141001 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-5fdjn" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.459266 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p7vhp" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.488171 4751 generic.go:334] "Generic (PLEG): container finished" podID="886a47b7-6715-4cd7-aea5-7db85b593b9b" containerID="4e67ee071bf6d27d2aacad78b8e2a9b1cad4b202c044eb94395ef3b85a36b3e5" exitCode=0 Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.488269 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77bc486b6-z2pvg" event={"ID":"886a47b7-6715-4cd7-aea5-7db85b593b9b","Type":"ContainerDied","Data":"4e67ee071bf6d27d2aacad78b8e2a9b1cad4b202c044eb94395ef3b85a36b3e5"} Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.492040 4751 generic.go:334] "Generic (PLEG): container finished" podID="61fbf77a-1344-4a32-81b4-9a12283ace53" containerID="0ddb810f9243354e528fff64b7098fda6c73e32066f11dc3cd5f6a8f0284bcdc" exitCode=0 Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.492105 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p7vhp" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.492108 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p7vhp" event={"ID":"61fbf77a-1344-4a32-81b4-9a12283ace53","Type":"ContainerDied","Data":"0ddb810f9243354e528fff64b7098fda6c73e32066f11dc3cd5f6a8f0284bcdc"} Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.492157 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7ff7586b44-p7vhp" event={"ID":"61fbf77a-1344-4a32-81b4-9a12283ace53","Type":"ContainerDied","Data":"ce28e111d2169a56020e396a7157657e7dded02c68bf85f9bd547a411d88e0d1"} Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.492177 4751 scope.go:117] "RemoveContainer" containerID="0ddb810f9243354e528fff64b7098fda6c73e32066f11dc3cd5f6a8f0284bcdc" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.524565 4751 scope.go:117] "RemoveContainer" containerID="0ddb810f9243354e528fff64b7098fda6c73e32066f11dc3cd5f6a8f0284bcdc" Jan 31 14:47:04 crc kubenswrapper[4751]: E0131 14:47:04.524953 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ddb810f9243354e528fff64b7098fda6c73e32066f11dc3cd5f6a8f0284bcdc\": container with ID starting with 0ddb810f9243354e528fff64b7098fda6c73e32066f11dc3cd5f6a8f0284bcdc not found: ID does not exist" containerID="0ddb810f9243354e528fff64b7098fda6c73e32066f11dc3cd5f6a8f0284bcdc" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.525002 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ddb810f9243354e528fff64b7098fda6c73e32066f11dc3cd5f6a8f0284bcdc"} err="failed to get container status \"0ddb810f9243354e528fff64b7098fda6c73e32066f11dc3cd5f6a8f0284bcdc\": rpc error: code = NotFound desc 
= could not find container \"0ddb810f9243354e528fff64b7098fda6c73e32066f11dc3cd5f6a8f0284bcdc\": container with ID starting with 0ddb810f9243354e528fff64b7098fda6c73e32066f11dc3cd5f6a8f0284bcdc not found: ID does not exist" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.573350 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61fbf77a-1344-4a32-81b4-9a12283ace53-client-ca\") pod \"61fbf77a-1344-4a32-81b4-9a12283ace53\" (UID: \"61fbf77a-1344-4a32-81b4-9a12283ace53\") " Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.573523 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61fbf77a-1344-4a32-81b4-9a12283ace53-serving-cert\") pod \"61fbf77a-1344-4a32-81b4-9a12283ace53\" (UID: \"61fbf77a-1344-4a32-81b4-9a12283ace53\") " Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.573583 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjqp6\" (UniqueName: \"kubernetes.io/projected/61fbf77a-1344-4a32-81b4-9a12283ace53-kube-api-access-pjqp6\") pod \"61fbf77a-1344-4a32-81b4-9a12283ace53\" (UID: \"61fbf77a-1344-4a32-81b4-9a12283ace53\") " Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.573695 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61fbf77a-1344-4a32-81b4-9a12283ace53-config\") pod \"61fbf77a-1344-4a32-81b4-9a12283ace53\" (UID: \"61fbf77a-1344-4a32-81b4-9a12283ace53\") " Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.574185 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61fbf77a-1344-4a32-81b4-9a12283ace53-client-ca" (OuterVolumeSpecName: "client-ca") pod "61fbf77a-1344-4a32-81b4-9a12283ace53" (UID: "61fbf77a-1344-4a32-81b4-9a12283ace53"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.574815 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61fbf77a-1344-4a32-81b4-9a12283ace53-config" (OuterVolumeSpecName: "config") pod "61fbf77a-1344-4a32-81b4-9a12283ace53" (UID: "61fbf77a-1344-4a32-81b4-9a12283ace53"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.578746 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61fbf77a-1344-4a32-81b4-9a12283ace53-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "61fbf77a-1344-4a32-81b4-9a12283ace53" (UID: "61fbf77a-1344-4a32-81b4-9a12283ace53"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.579470 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61fbf77a-1344-4a32-81b4-9a12283ace53-kube-api-access-pjqp6" (OuterVolumeSpecName: "kube-api-access-pjqp6") pod "61fbf77a-1344-4a32-81b4-9a12283ace53" (UID: "61fbf77a-1344-4a32-81b4-9a12283ace53"). InnerVolumeSpecName "kube-api-access-pjqp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.592548 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77bc486b6-z2pvg" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.617091 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-5fdjn"] Jan 31 14:47:04 crc kubenswrapper[4751]: W0131 14:47:04.632298 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9e5d9ae7_378c_4f07_9d25_d1b3d187bde9.slice/crio-aa513204eefb20e434ea51bebfe6e79a0009c66d211ca5b79c4bd783529f6928 WatchSource:0}: Error finding container aa513204eefb20e434ea51bebfe6e79a0009c66d211ca5b79c4bd783529f6928: Status 404 returned error can't find the container with id aa513204eefb20e434ea51bebfe6e79a0009c66d211ca5b79c4bd783529f6928 Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.675777 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/886a47b7-6715-4cd7-aea5-7db85b593b9b-client-ca\") pod \"886a47b7-6715-4cd7-aea5-7db85b593b9b\" (UID: \"886a47b7-6715-4cd7-aea5-7db85b593b9b\") " Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.676210 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/886a47b7-6715-4cd7-aea5-7db85b593b9b-proxy-ca-bundles\") pod \"886a47b7-6715-4cd7-aea5-7db85b593b9b\" (UID: \"886a47b7-6715-4cd7-aea5-7db85b593b9b\") " Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.676294 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/886a47b7-6715-4cd7-aea5-7db85b593b9b-serving-cert\") pod \"886a47b7-6715-4cd7-aea5-7db85b593b9b\" (UID: \"886a47b7-6715-4cd7-aea5-7db85b593b9b\") " Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.676331 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"config\" (UniqueName: \"kubernetes.io/configmap/886a47b7-6715-4cd7-aea5-7db85b593b9b-config\") pod \"886a47b7-6715-4cd7-aea5-7db85b593b9b\" (UID: \"886a47b7-6715-4cd7-aea5-7db85b593b9b\") " Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.676347 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sbxv\" (UniqueName: \"kubernetes.io/projected/886a47b7-6715-4cd7-aea5-7db85b593b9b-kube-api-access-6sbxv\") pod \"886a47b7-6715-4cd7-aea5-7db85b593b9b\" (UID: \"886a47b7-6715-4cd7-aea5-7db85b593b9b\") " Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.676555 4751 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/61fbf77a-1344-4a32-81b4-9a12283ace53-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.676567 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/61fbf77a-1344-4a32-81b4-9a12283ace53-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.676575 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjqp6\" (UniqueName: \"kubernetes.io/projected/61fbf77a-1344-4a32-81b4-9a12283ace53-kube-api-access-pjqp6\") on node \"crc\" DevicePath \"\"" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.676585 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/61fbf77a-1344-4a32-81b4-9a12283ace53-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.677330 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/886a47b7-6715-4cd7-aea5-7db85b593b9b-client-ca" (OuterVolumeSpecName: "client-ca") pod "886a47b7-6715-4cd7-aea5-7db85b593b9b" (UID: "886a47b7-6715-4cd7-aea5-7db85b593b9b"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.677384 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/886a47b7-6715-4cd7-aea5-7db85b593b9b-config" (OuterVolumeSpecName: "config") pod "886a47b7-6715-4cd7-aea5-7db85b593b9b" (UID: "886a47b7-6715-4cd7-aea5-7db85b593b9b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.678226 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/886a47b7-6715-4cd7-aea5-7db85b593b9b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "886a47b7-6715-4cd7-aea5-7db85b593b9b" (UID: "886a47b7-6715-4cd7-aea5-7db85b593b9b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.680186 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/886a47b7-6715-4cd7-aea5-7db85b593b9b-kube-api-access-6sbxv" (OuterVolumeSpecName: "kube-api-access-6sbxv") pod "886a47b7-6715-4cd7-aea5-7db85b593b9b" (UID: "886a47b7-6715-4cd7-aea5-7db85b593b9b"). InnerVolumeSpecName "kube-api-access-6sbxv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.689510 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/886a47b7-6715-4cd7-aea5-7db85b593b9b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "886a47b7-6715-4cd7-aea5-7db85b593b9b" (UID: "886a47b7-6715-4cd7-aea5-7db85b593b9b"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.777189 4751 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/886a47b7-6715-4cd7-aea5-7db85b593b9b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.777242 4751 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/886a47b7-6715-4cd7-aea5-7db85b593b9b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.777252 4751 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/886a47b7-6715-4cd7-aea5-7db85b593b9b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.777261 4751 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/886a47b7-6715-4cd7-aea5-7db85b593b9b-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.777270 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6sbxv\" (UniqueName: \"kubernetes.io/projected/886a47b7-6715-4cd7-aea5-7db85b593b9b-kube-api-access-6sbxv\") on node \"crc\" DevicePath \"\"" Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.825459 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ff7586b44-p7vhp"] Jan 31 14:47:04 crc kubenswrapper[4751]: I0131 14:47:04.829308 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7ff7586b44-p7vhp"] Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.202924 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-69f964bddc-w29sc"] Jan 31 14:47:05 crc 
kubenswrapper[4751]: E0131 14:47:05.203305 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="886a47b7-6715-4cd7-aea5-7db85b593b9b" containerName="controller-manager" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.203350 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="886a47b7-6715-4cd7-aea5-7db85b593b9b" containerName="controller-manager" Jan 31 14:47:05 crc kubenswrapper[4751]: E0131 14:47:05.203379 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61fbf77a-1344-4a32-81b4-9a12283ace53" containerName="route-controller-manager" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.203393 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="61fbf77a-1344-4a32-81b4-9a12283ace53" containerName="route-controller-manager" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.203553 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="61fbf77a-1344-4a32-81b4-9a12283ace53" containerName="route-controller-manager" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.203581 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="886a47b7-6715-4cd7-aea5-7db85b593b9b" containerName="controller-manager" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.204172 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-69f964bddc-w29sc" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.212016 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-786458fd97-vccrh"] Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.212995 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-786458fd97-vccrh" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.217703 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.218178 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.218397 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.218606 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.218816 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.219252 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.233951 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69f964bddc-w29sc"] Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.243770 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-786458fd97-vccrh"] Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.283572 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20655fbe-0d1c-451d-8ab2-1b8e3423fbcd-client-ca\") pod \"route-controller-manager-786458fd97-vccrh\" (UID: 
\"20655fbe-0d1c-451d-8ab2-1b8e3423fbcd\") " pod="openshift-route-controller-manager/route-controller-manager-786458fd97-vccrh" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.283654 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8efbc4e-7e88-4914-8141-8b93ace1dcb0-config\") pod \"controller-manager-69f964bddc-w29sc\" (UID: \"d8efbc4e-7e88-4914-8141-8b93ace1dcb0\") " pod="openshift-controller-manager/controller-manager-69f964bddc-w29sc" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.283713 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20655fbe-0d1c-451d-8ab2-1b8e3423fbcd-serving-cert\") pod \"route-controller-manager-786458fd97-vccrh\" (UID: \"20655fbe-0d1c-451d-8ab2-1b8e3423fbcd\") " pod="openshift-route-controller-manager/route-controller-manager-786458fd97-vccrh" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.283757 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8efbc4e-7e88-4914-8141-8b93ace1dcb0-client-ca\") pod \"controller-manager-69f964bddc-w29sc\" (UID: \"d8efbc4e-7e88-4914-8141-8b93ace1dcb0\") " pod="openshift-controller-manager/controller-manager-69f964bddc-w29sc" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.283810 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szbhb\" (UniqueName: \"kubernetes.io/projected/d8efbc4e-7e88-4914-8141-8b93ace1dcb0-kube-api-access-szbhb\") pod \"controller-manager-69f964bddc-w29sc\" (UID: \"d8efbc4e-7e88-4914-8141-8b93ace1dcb0\") " pod="openshift-controller-manager/controller-manager-69f964bddc-w29sc" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.283855 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20655fbe-0d1c-451d-8ab2-1b8e3423fbcd-config\") pod \"route-controller-manager-786458fd97-vccrh\" (UID: \"20655fbe-0d1c-451d-8ab2-1b8e3423fbcd\") " pod="openshift-route-controller-manager/route-controller-manager-786458fd97-vccrh" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.283886 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8efbc4e-7e88-4914-8141-8b93ace1dcb0-serving-cert\") pod \"controller-manager-69f964bddc-w29sc\" (UID: \"d8efbc4e-7e88-4914-8141-8b93ace1dcb0\") " pod="openshift-controller-manager/controller-manager-69f964bddc-w29sc" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.283922 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8efbc4e-7e88-4914-8141-8b93ace1dcb0-proxy-ca-bundles\") pod \"controller-manager-69f964bddc-w29sc\" (UID: \"d8efbc4e-7e88-4914-8141-8b93ace1dcb0\") " pod="openshift-controller-manager/controller-manager-69f964bddc-w29sc" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.283971 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pnljq\" (UniqueName: \"kubernetes.io/projected/20655fbe-0d1c-451d-8ab2-1b8e3423fbcd-kube-api-access-pnljq\") pod \"route-controller-manager-786458fd97-vccrh\" (UID: \"20655fbe-0d1c-451d-8ab2-1b8e3423fbcd\") " pod="openshift-route-controller-manager/route-controller-manager-786458fd97-vccrh" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.384782 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szbhb\" (UniqueName: \"kubernetes.io/projected/d8efbc4e-7e88-4914-8141-8b93ace1dcb0-kube-api-access-szbhb\") pod 
\"controller-manager-69f964bddc-w29sc\" (UID: \"d8efbc4e-7e88-4914-8141-8b93ace1dcb0\") " pod="openshift-controller-manager/controller-manager-69f964bddc-w29sc" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.384836 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20655fbe-0d1c-451d-8ab2-1b8e3423fbcd-config\") pod \"route-controller-manager-786458fd97-vccrh\" (UID: \"20655fbe-0d1c-451d-8ab2-1b8e3423fbcd\") " pod="openshift-route-controller-manager/route-controller-manager-786458fd97-vccrh" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.384869 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8efbc4e-7e88-4914-8141-8b93ace1dcb0-serving-cert\") pod \"controller-manager-69f964bddc-w29sc\" (UID: \"d8efbc4e-7e88-4914-8141-8b93ace1dcb0\") " pod="openshift-controller-manager/controller-manager-69f964bddc-w29sc" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.384889 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8efbc4e-7e88-4914-8141-8b93ace1dcb0-proxy-ca-bundles\") pod \"controller-manager-69f964bddc-w29sc\" (UID: \"d8efbc4e-7e88-4914-8141-8b93ace1dcb0\") " pod="openshift-controller-manager/controller-manager-69f964bddc-w29sc" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.384917 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pnljq\" (UniqueName: \"kubernetes.io/projected/20655fbe-0d1c-451d-8ab2-1b8e3423fbcd-kube-api-access-pnljq\") pod \"route-controller-manager-786458fd97-vccrh\" (UID: \"20655fbe-0d1c-451d-8ab2-1b8e3423fbcd\") " pod="openshift-route-controller-manager/route-controller-manager-786458fd97-vccrh" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.384940 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20655fbe-0d1c-451d-8ab2-1b8e3423fbcd-client-ca\") pod \"route-controller-manager-786458fd97-vccrh\" (UID: \"20655fbe-0d1c-451d-8ab2-1b8e3423fbcd\") " pod="openshift-route-controller-manager/route-controller-manager-786458fd97-vccrh" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.384960 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8efbc4e-7e88-4914-8141-8b93ace1dcb0-config\") pod \"controller-manager-69f964bddc-w29sc\" (UID: \"d8efbc4e-7e88-4914-8141-8b93ace1dcb0\") " pod="openshift-controller-manager/controller-manager-69f964bddc-w29sc" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.384984 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20655fbe-0d1c-451d-8ab2-1b8e3423fbcd-serving-cert\") pod \"route-controller-manager-786458fd97-vccrh\" (UID: \"20655fbe-0d1c-451d-8ab2-1b8e3423fbcd\") " pod="openshift-route-controller-manager/route-controller-manager-786458fd97-vccrh" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.385008 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8efbc4e-7e88-4914-8141-8b93ace1dcb0-client-ca\") pod \"controller-manager-69f964bddc-w29sc\" (UID: \"d8efbc4e-7e88-4914-8141-8b93ace1dcb0\") " pod="openshift-controller-manager/controller-manager-69f964bddc-w29sc" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.386984 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d8efbc4e-7e88-4914-8141-8b93ace1dcb0-client-ca\") pod \"controller-manager-69f964bddc-w29sc\" (UID: \"d8efbc4e-7e88-4914-8141-8b93ace1dcb0\") " pod="openshift-controller-manager/controller-manager-69f964bddc-w29sc" Jan 31 
14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.387170 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d8efbc4e-7e88-4914-8141-8b93ace1dcb0-config\") pod \"controller-manager-69f964bddc-w29sc\" (UID: \"d8efbc4e-7e88-4914-8141-8b93ace1dcb0\") " pod="openshift-controller-manager/controller-manager-69f964bddc-w29sc" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.387592 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/20655fbe-0d1c-451d-8ab2-1b8e3423fbcd-client-ca\") pod \"route-controller-manager-786458fd97-vccrh\" (UID: \"20655fbe-0d1c-451d-8ab2-1b8e3423fbcd\") " pod="openshift-route-controller-manager/route-controller-manager-786458fd97-vccrh" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.388085 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/d8efbc4e-7e88-4914-8141-8b93ace1dcb0-proxy-ca-bundles\") pod \"controller-manager-69f964bddc-w29sc\" (UID: \"d8efbc4e-7e88-4914-8141-8b93ace1dcb0\") " pod="openshift-controller-manager/controller-manager-69f964bddc-w29sc" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.388372 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/20655fbe-0d1c-451d-8ab2-1b8e3423fbcd-config\") pod \"route-controller-manager-786458fd97-vccrh\" (UID: \"20655fbe-0d1c-451d-8ab2-1b8e3423fbcd\") " pod="openshift-route-controller-manager/route-controller-manager-786458fd97-vccrh" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.392786 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d8efbc4e-7e88-4914-8141-8b93ace1dcb0-serving-cert\") pod \"controller-manager-69f964bddc-w29sc\" (UID: \"d8efbc4e-7e88-4914-8141-8b93ace1dcb0\") " 
pod="openshift-controller-manager/controller-manager-69f964bddc-w29sc" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.392951 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/20655fbe-0d1c-451d-8ab2-1b8e3423fbcd-serving-cert\") pod \"route-controller-manager-786458fd97-vccrh\" (UID: \"20655fbe-0d1c-451d-8ab2-1b8e3423fbcd\") " pod="openshift-route-controller-manager/route-controller-manager-786458fd97-vccrh" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.407486 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pnljq\" (UniqueName: \"kubernetes.io/projected/20655fbe-0d1c-451d-8ab2-1b8e3423fbcd-kube-api-access-pnljq\") pod \"route-controller-manager-786458fd97-vccrh\" (UID: \"20655fbe-0d1c-451d-8ab2-1b8e3423fbcd\") " pod="openshift-route-controller-manager/route-controller-manager-786458fd97-vccrh" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.410091 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szbhb\" (UniqueName: \"kubernetes.io/projected/d8efbc4e-7e88-4914-8141-8b93ace1dcb0-kube-api-access-szbhb\") pod \"controller-manager-69f964bddc-w29sc\" (UID: \"d8efbc4e-7e88-4914-8141-8b93ace1dcb0\") " pod="openshift-controller-manager/controller-manager-69f964bddc-w29sc" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.501161 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77bc486b6-z2pvg" event={"ID":"886a47b7-6715-4cd7-aea5-7db85b593b9b","Type":"ContainerDied","Data":"1f155d2931dd972625ba384f618c3c395946a7bcfd8383aa5e80dfb2a1ba9412"} Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.501191 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-77bc486b6-z2pvg" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.501211 4751 scope.go:117] "RemoveContainer" containerID="4e67ee071bf6d27d2aacad78b8e2a9b1cad4b202c044eb94395ef3b85a36b3e5" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.505167 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-5fdjn" event={"ID":"9e5d9ae7-378c-4f07-9d25-d1b3d187bde9","Type":"ContainerStarted","Data":"2a23d20731bcc1295cc79827deb93850e328eee568b010969dd169117fad03dd"} Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.505507 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-5fdjn" event={"ID":"9e5d9ae7-378c-4f07-9d25-d1b3d187bde9","Type":"ContainerStarted","Data":"aa513204eefb20e434ea51bebfe6e79a0009c66d211ca5b79c4bd783529f6928"} Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.505547 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-5fdjn" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.526523 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-5fdjn" podStartSLOduration=2.52649724 podStartE2EDuration="2.52649724s" podCreationTimestamp="2026-01-31 14:47:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:47:05.523149747 +0000 UTC m=+329.897862662" watchObservedRunningTime="2026-01-31 14:47:05.52649724 +0000 UTC m=+329.901210165" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.537480 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-69f964bddc-w29sc" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.545474 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-786458fd97-vccrh" Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.569806 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-77bc486b6-z2pvg"] Jan 31 14:47:05 crc kubenswrapper[4751]: I0131 14:47:05.573388 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-77bc486b6-z2pvg"] Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.023127 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-786458fd97-vccrh"] Jan 31 14:47:06 crc kubenswrapper[4751]: W0131 14:47:06.031675 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod20655fbe_0d1c_451d_8ab2_1b8e3423fbcd.slice/crio-42e78426b9ee75e568c2050d6a332e4005abb4b37e408c567550901cb461109e WatchSource:0}: Error finding container 42e78426b9ee75e568c2050d6a332e4005abb4b37e408c567550901cb461109e: Status 404 returned error can't find the container with id 42e78426b9ee75e568c2050d6a332e4005abb4b37e408c567550901cb461109e Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.085647 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-69f964bddc-w29sc"] Jan 31 14:47:06 crc kubenswrapper[4751]: W0131 14:47:06.094346 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd8efbc4e_7e88_4914_8141_8b93ace1dcb0.slice/crio-c34e6be9317685ffc3d68cf782fa0094bfb98f5274e80b311fa44888e6c0023b WatchSource:0}: Error finding container 
c34e6be9317685ffc3d68cf782fa0094bfb98f5274e80b311fa44888e6c0023b: Status 404 returned error can't find the container with id c34e6be9317685ffc3d68cf782fa0094bfb98f5274e80b311fa44888e6c0023b Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.336199 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m4m6r"] Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.336910 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-m4m6r" podUID="8d5f1383-42d7-47a1-9e47-8dba038241d2" containerName="registry-server" containerID="cri-o://eabca8f8fcdbfb2f04b488498b2a615e9946a5ba739f9fb75c570ef168f4bcd8" gracePeriod=30 Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.352689 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wcnsn"] Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.352916 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-wcnsn" podUID="074619b7-9220-4377-b93d-6088199a5e16" containerName="registry-server" containerID="cri-o://362555e9a4bda60e895d4cff8fad32fbdb6800b24c4a8d8deeb2ac026aebcc1b" gracePeriod=30 Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.362951 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5r6kv"] Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.363216 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-5r6kv" podUID="8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea" containerName="marketplace-operator" containerID="cri-o://cc163d448fa8fad6b5ab0077c0960c4003a53c503f6d097090f206fed6245a22" gracePeriod=30 Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.371923 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/redhat-marketplace-k2xfl"] Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.372251 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-k2xfl" podUID="e656c7af-fbd9-4e9c-ae61-d4142d37c89f" containerName="registry-server" containerID="cri-o://3bb7101aeb47dd5d5b9aa6ef1075e32a424c360c1ebaa7fd0787c20e4303f647" gracePeriod=30 Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.386167 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jv94g"] Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.387092 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jv94g" Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.393822 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gktqp"] Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.394153 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-gktqp" podUID="0cfb2e52-7371-4d38-994c-92b5b7d123cc" containerName="registry-server" containerID="cri-o://632dd7cf21c157e19b8b506aada8a2a0cc9ee7a4c7089d92374fc5dc9f67b1ea" gracePeriod=30 Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.422410 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61fbf77a-1344-4a32-81b4-9a12283ace53" path="/var/lib/kubelet/pods/61fbf77a-1344-4a32-81b4-9a12283ace53/volumes" Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.423454 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="886a47b7-6715-4cd7-aea5-7db85b593b9b" path="/var/lib/kubelet/pods/886a47b7-6715-4cd7-aea5-7db85b593b9b/volumes" Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.424529 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/marketplace-operator-79b997595-jv94g"] Jan 31 14:47:06 crc kubenswrapper[4751]: E0131 14:47:06.441543 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 362555e9a4bda60e895d4cff8fad32fbdb6800b24c4a8d8deeb2ac026aebcc1b is running failed: container process not found" containerID="362555e9a4bda60e895d4cff8fad32fbdb6800b24c4a8d8deeb2ac026aebcc1b" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 14:47:06 crc kubenswrapper[4751]: E0131 14:47:06.442168 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 362555e9a4bda60e895d4cff8fad32fbdb6800b24c4a8d8deeb2ac026aebcc1b is running failed: container process not found" containerID="362555e9a4bda60e895d4cff8fad32fbdb6800b24c4a8d8deeb2ac026aebcc1b" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 14:47:06 crc kubenswrapper[4751]: E0131 14:47:06.442549 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 362555e9a4bda60e895d4cff8fad32fbdb6800b24c4a8d8deeb2ac026aebcc1b is running failed: container process not found" containerID="362555e9a4bda60e895d4cff8fad32fbdb6800b24c4a8d8deeb2ac026aebcc1b" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 14:47:06 crc kubenswrapper[4751]: E0131 14:47:06.442583 4751 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 362555e9a4bda60e895d4cff8fad32fbdb6800b24c4a8d8deeb2ac026aebcc1b is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/community-operators-wcnsn" podUID="074619b7-9220-4377-b93d-6088199a5e16" containerName="registry-server" Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.498507 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9853dd16-26f9-4fe4-9468-52d39dd4dd1f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jv94g\" (UID: \"9853dd16-26f9-4fe4-9468-52d39dd4dd1f\") " pod="openshift-marketplace/marketplace-operator-79b997595-jv94g" Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.498567 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x7sz\" (UniqueName: \"kubernetes.io/projected/9853dd16-26f9-4fe4-9468-52d39dd4dd1f-kube-api-access-8x7sz\") pod \"marketplace-operator-79b997595-jv94g\" (UID: \"9853dd16-26f9-4fe4-9468-52d39dd4dd1f\") " pod="openshift-marketplace/marketplace-operator-79b997595-jv94g" Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.498590 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9853dd16-26f9-4fe4-9468-52d39dd4dd1f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jv94g\" (UID: \"9853dd16-26f9-4fe4-9468-52d39dd4dd1f\") " pod="openshift-marketplace/marketplace-operator-79b997595-jv94g" Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.527645 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-786458fd97-vccrh" event={"ID":"20655fbe-0d1c-451d-8ab2-1b8e3423fbcd","Type":"ContainerStarted","Data":"6d8bfb291d86aed9721a7701126d432d5dd1555a7bc160fc6cddff2dc085284f"} Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.527687 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-786458fd97-vccrh" event={"ID":"20655fbe-0d1c-451d-8ab2-1b8e3423fbcd","Type":"ContainerStarted","Data":"42e78426b9ee75e568c2050d6a332e4005abb4b37e408c567550901cb461109e"} Jan 31 14:47:06 crc 
kubenswrapper[4751]: I0131 14:47:06.528395 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-786458fd97-vccrh" Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.545688 4751 generic.go:334] "Generic (PLEG): container finished" podID="8d5f1383-42d7-47a1-9e47-8dba038241d2" containerID="eabca8f8fcdbfb2f04b488498b2a615e9946a5ba739f9fb75c570ef168f4bcd8" exitCode=0 Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.545784 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m4m6r" event={"ID":"8d5f1383-42d7-47a1-9e47-8dba038241d2","Type":"ContainerDied","Data":"eabca8f8fcdbfb2f04b488498b2a615e9946a5ba739f9fb75c570ef168f4bcd8"} Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.570663 4751 generic.go:334] "Generic (PLEG): container finished" podID="e656c7af-fbd9-4e9c-ae61-d4142d37c89f" containerID="3bb7101aeb47dd5d5b9aa6ef1075e32a424c360c1ebaa7fd0787c20e4303f647" exitCode=0 Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.570757 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k2xfl" event={"ID":"e656c7af-fbd9-4e9c-ae61-d4142d37c89f","Type":"ContainerDied","Data":"3bb7101aeb47dd5d5b9aa6ef1075e32a424c360c1ebaa7fd0787c20e4303f647"} Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.600803 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x7sz\" (UniqueName: \"kubernetes.io/projected/9853dd16-26f9-4fe4-9468-52d39dd4dd1f-kube-api-access-8x7sz\") pod \"marketplace-operator-79b997595-jv94g\" (UID: \"9853dd16-26f9-4fe4-9468-52d39dd4dd1f\") " pod="openshift-marketplace/marketplace-operator-79b997595-jv94g" Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.600859 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/9853dd16-26f9-4fe4-9468-52d39dd4dd1f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jv94g\" (UID: \"9853dd16-26f9-4fe4-9468-52d39dd4dd1f\") " pod="openshift-marketplace/marketplace-operator-79b997595-jv94g" Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.600965 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9853dd16-26f9-4fe4-9468-52d39dd4dd1f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jv94g\" (UID: \"9853dd16-26f9-4fe4-9468-52d39dd4dd1f\") " pod="openshift-marketplace/marketplace-operator-79b997595-jv94g" Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.602917 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9853dd16-26f9-4fe4-9468-52d39dd4dd1f-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-jv94g\" (UID: \"9853dd16-26f9-4fe4-9468-52d39dd4dd1f\") " pod="openshift-marketplace/marketplace-operator-79b997595-jv94g" Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.622143 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69f964bddc-w29sc" event={"ID":"d8efbc4e-7e88-4914-8141-8b93ace1dcb0","Type":"ContainerStarted","Data":"1b42a0bcb0990937c0f4a17f1eafa43df652100c854be4cc03e56963d9f512df"} Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.622187 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-69f964bddc-w29sc" event={"ID":"d8efbc4e-7e88-4914-8141-8b93ace1dcb0","Type":"ContainerStarted","Data":"c34e6be9317685ffc3d68cf782fa0094bfb98f5274e80b311fa44888e6c0023b"} Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.623088 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-controller-manager/controller-manager-69f964bddc-w29sc" Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.627209 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/9853dd16-26f9-4fe4-9468-52d39dd4dd1f-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-jv94g\" (UID: \"9853dd16-26f9-4fe4-9468-52d39dd4dd1f\") " pod="openshift-marketplace/marketplace-operator-79b997595-jv94g" Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.639733 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x7sz\" (UniqueName: \"kubernetes.io/projected/9853dd16-26f9-4fe4-9468-52d39dd4dd1f-kube-api-access-8x7sz\") pod \"marketplace-operator-79b997595-jv94g\" (UID: \"9853dd16-26f9-4fe4-9468-52d39dd4dd1f\") " pod="openshift-marketplace/marketplace-operator-79b997595-jv94g" Jan 31 14:47:06 crc kubenswrapper[4751]: E0131 14:47:06.649732 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of eabca8f8fcdbfb2f04b488498b2a615e9946a5ba739f9fb75c570ef168f4bcd8 is running failed: container process not found" containerID="eabca8f8fcdbfb2f04b488498b2a615e9946a5ba739f9fb75c570ef168f4bcd8" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 14:47:06 crc kubenswrapper[4751]: E0131 14:47:06.650463 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of eabca8f8fcdbfb2f04b488498b2a615e9946a5ba739f9fb75c570ef168f4bcd8 is running failed: container process not found" containerID="eabca8f8fcdbfb2f04b488498b2a615e9946a5ba739f9fb75c570ef168f4bcd8" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.651475 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-controller-manager/controller-manager-69f964bddc-w29sc" podStartSLOduration=3.65146463 podStartE2EDuration="3.65146463s" podCreationTimestamp="2026-01-31 14:47:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:47:06.650882653 +0000 UTC m=+331.025595538" watchObservedRunningTime="2026-01-31 14:47:06.65146463 +0000 UTC m=+331.026177515" Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.653520 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-786458fd97-vccrh" podStartSLOduration=2.653513287 podStartE2EDuration="2.653513287s" podCreationTimestamp="2026-01-31 14:47:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:47:06.581376744 +0000 UTC m=+330.956089629" watchObservedRunningTime="2026-01-31 14:47:06.653513287 +0000 UTC m=+331.028226172" Jan 31 14:47:06 crc kubenswrapper[4751]: E0131 14:47:06.655603 4751 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of eabca8f8fcdbfb2f04b488498b2a615e9946a5ba739f9fb75c570ef168f4bcd8 is running failed: container process not found" containerID="eabca8f8fcdbfb2f04b488498b2a615e9946a5ba739f9fb75c570ef168f4bcd8" cmd=["grpc_health_probe","-addr=:50051"] Jan 31 14:47:06 crc kubenswrapper[4751]: E0131 14:47:06.655664 4751 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of eabca8f8fcdbfb2f04b488498b2a615e9946a5ba739f9fb75c570ef168f4bcd8 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-m4m6r" podUID="8d5f1383-42d7-47a1-9e47-8dba038241d2" containerName="registry-server" Jan 31 14:47:06 crc 
kubenswrapper[4751]: I0131 14:47:06.672380 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-69f964bddc-w29sc" Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.681857 4751 generic.go:334] "Generic (PLEG): container finished" podID="8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea" containerID="cc163d448fa8fad6b5ab0077c0960c4003a53c503f6d097090f206fed6245a22" exitCode=0 Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.681913 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5r6kv" event={"ID":"8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea","Type":"ContainerDied","Data":"cc163d448fa8fad6b5ab0077c0960c4003a53c503f6d097090f206fed6245a22"} Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.692893 4751 generic.go:334] "Generic (PLEG): container finished" podID="074619b7-9220-4377-b93d-6088199a5e16" containerID="362555e9a4bda60e895d4cff8fad32fbdb6800b24c4a8d8deeb2ac026aebcc1b" exitCode=0 Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.693043 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wcnsn" event={"ID":"074619b7-9220-4377-b93d-6088199a5e16","Type":"ContainerDied","Data":"362555e9a4bda60e895d4cff8fad32fbdb6800b24c4a8d8deeb2ac026aebcc1b"} Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.704940 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-jv94g" Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.817294 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-786458fd97-vccrh" Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.946320 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5r6kv" Jan 31 14:47:06 crc kubenswrapper[4751]: I0131 14:47:06.997510 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k2xfl" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.010458 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdrlx\" (UniqueName: \"kubernetes.io/projected/8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea-kube-api-access-qdrlx\") pod \"8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea\" (UID: \"8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea\") " Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.010503 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea-marketplace-trusted-ca\") pod \"8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea\" (UID: \"8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea\") " Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.010543 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea-marketplace-operator-metrics\") pod \"8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea\" (UID: \"8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea\") " Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.012345 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea" (UID: "8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea"). InnerVolumeSpecName "marketplace-trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.017492 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea-kube-api-access-qdrlx" (OuterVolumeSpecName: "kube-api-access-qdrlx") pod "8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea" (UID: "8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea"). InnerVolumeSpecName "kube-api-access-qdrlx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.034853 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea" (UID: "8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.091350 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-m4m6r" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.111514 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e656c7af-fbd9-4e9c-ae61-d4142d37c89f-utilities\") pod \"e656c7af-fbd9-4e9c-ae61-d4142d37c89f\" (UID: \"e656c7af-fbd9-4e9c-ae61-d4142d37c89f\") " Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.111636 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e656c7af-fbd9-4e9c-ae61-d4142d37c89f-catalog-content\") pod \"e656c7af-fbd9-4e9c-ae61-d4142d37c89f\" (UID: \"e656c7af-fbd9-4e9c-ae61-d4142d37c89f\") " Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.111670 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wkxqv\" (UniqueName: \"kubernetes.io/projected/e656c7af-fbd9-4e9c-ae61-d4142d37c89f-kube-api-access-wkxqv\") pod \"e656c7af-fbd9-4e9c-ae61-d4142d37c89f\" (UID: \"e656c7af-fbd9-4e9c-ae61-d4142d37c89f\") " Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.111883 4751 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.111900 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qdrlx\" (UniqueName: \"kubernetes.io/projected/8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea-kube-api-access-qdrlx\") on node \"crc\" DevicePath \"\"" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.111909 4751 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea-marketplace-trusted-ca\") on node \"crc\" DevicePath 
\"\"" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.113988 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e656c7af-fbd9-4e9c-ae61-d4142d37c89f-utilities" (OuterVolumeSpecName: "utilities") pod "e656c7af-fbd9-4e9c-ae61-d4142d37c89f" (UID: "e656c7af-fbd9-4e9c-ae61-d4142d37c89f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.115057 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e656c7af-fbd9-4e9c-ae61-d4142d37c89f-kube-api-access-wkxqv" (OuterVolumeSpecName: "kube-api-access-wkxqv") pod "e656c7af-fbd9-4e9c-ae61-d4142d37c89f" (UID: "e656c7af-fbd9-4e9c-ae61-d4142d37c89f"). InnerVolumeSpecName "kube-api-access-wkxqv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.138820 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e656c7af-fbd9-4e9c-ae61-d4142d37c89f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e656c7af-fbd9-4e9c-ae61-d4142d37c89f" (UID: "e656c7af-fbd9-4e9c-ae61-d4142d37c89f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.175095 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-gktqp" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.180298 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-wcnsn" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.213971 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgf8m\" (UniqueName: \"kubernetes.io/projected/0cfb2e52-7371-4d38-994c-92b5b7d123cc-kube-api-access-qgf8m\") pod \"0cfb2e52-7371-4d38-994c-92b5b7d123cc\" (UID: \"0cfb2e52-7371-4d38-994c-92b5b7d123cc\") " Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.214085 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d5f1383-42d7-47a1-9e47-8dba038241d2-catalog-content\") pod \"8d5f1383-42d7-47a1-9e47-8dba038241d2\" (UID: \"8d5f1383-42d7-47a1-9e47-8dba038241d2\") " Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.215616 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cfb2e52-7371-4d38-994c-92b5b7d123cc-utilities\") pod \"0cfb2e52-7371-4d38-994c-92b5b7d123cc\" (UID: \"0cfb2e52-7371-4d38-994c-92b5b7d123cc\") " Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.220765 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cfb2e52-7371-4d38-994c-92b5b7d123cc-catalog-content\") pod \"0cfb2e52-7371-4d38-994c-92b5b7d123cc\" (UID: \"0cfb2e52-7371-4d38-994c-92b5b7d123cc\") " Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.220848 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-566b8\" (UniqueName: \"kubernetes.io/projected/8d5f1383-42d7-47a1-9e47-8dba038241d2-kube-api-access-566b8\") pod \"8d5f1383-42d7-47a1-9e47-8dba038241d2\" (UID: \"8d5f1383-42d7-47a1-9e47-8dba038241d2\") " Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.220880 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-pzp7l\" (UniqueName: \"kubernetes.io/projected/074619b7-9220-4377-b93d-6088199a5e16-kube-api-access-pzp7l\") pod \"074619b7-9220-4377-b93d-6088199a5e16\" (UID: \"074619b7-9220-4377-b93d-6088199a5e16\") " Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.221378 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d5f1383-42d7-47a1-9e47-8dba038241d2-utilities\") pod \"8d5f1383-42d7-47a1-9e47-8dba038241d2\" (UID: \"8d5f1383-42d7-47a1-9e47-8dba038241d2\") " Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.221490 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/074619b7-9220-4377-b93d-6088199a5e16-catalog-content\") pod \"074619b7-9220-4377-b93d-6088199a5e16\" (UID: \"074619b7-9220-4377-b93d-6088199a5e16\") " Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.221552 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/074619b7-9220-4377-b93d-6088199a5e16-utilities\") pod \"074619b7-9220-4377-b93d-6088199a5e16\" (UID: \"074619b7-9220-4377-b93d-6088199a5e16\") " Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.222051 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e656c7af-fbd9-4e9c-ae61-d4142d37c89f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.222096 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wkxqv\" (UniqueName: \"kubernetes.io/projected/e656c7af-fbd9-4e9c-ae61-d4142d37c89f-kube-api-access-wkxqv\") on node \"crc\" DevicePath \"\"" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.222110 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: 
\"kubernetes.io/empty-dir/e656c7af-fbd9-4e9c-ae61-d4142d37c89f-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.222206 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d5f1383-42d7-47a1-9e47-8dba038241d2-utilities" (OuterVolumeSpecName: "utilities") pod "8d5f1383-42d7-47a1-9e47-8dba038241d2" (UID: "8d5f1383-42d7-47a1-9e47-8dba038241d2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.222420 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/074619b7-9220-4377-b93d-6088199a5e16-utilities" (OuterVolumeSpecName: "utilities") pod "074619b7-9220-4377-b93d-6088199a5e16" (UID: "074619b7-9220-4377-b93d-6088199a5e16"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.222497 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cfb2e52-7371-4d38-994c-92b5b7d123cc-utilities" (OuterVolumeSpecName: "utilities") pod "0cfb2e52-7371-4d38-994c-92b5b7d123cc" (UID: "0cfb2e52-7371-4d38-994c-92b5b7d123cc"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.231170 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0cfb2e52-7371-4d38-994c-92b5b7d123cc-kube-api-access-qgf8m" (OuterVolumeSpecName: "kube-api-access-qgf8m") pod "0cfb2e52-7371-4d38-994c-92b5b7d123cc" (UID: "0cfb2e52-7371-4d38-994c-92b5b7d123cc"). InnerVolumeSpecName "kube-api-access-qgf8m". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.231837 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d5f1383-42d7-47a1-9e47-8dba038241d2-kube-api-access-566b8" (OuterVolumeSpecName: "kube-api-access-566b8") pod "8d5f1383-42d7-47a1-9e47-8dba038241d2" (UID: "8d5f1383-42d7-47a1-9e47-8dba038241d2"). InnerVolumeSpecName "kube-api-access-566b8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.232561 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/074619b7-9220-4377-b93d-6088199a5e16-kube-api-access-pzp7l" (OuterVolumeSpecName: "kube-api-access-pzp7l") pod "074619b7-9220-4377-b93d-6088199a5e16" (UID: "074619b7-9220-4377-b93d-6088199a5e16"). InnerVolumeSpecName "kube-api-access-pzp7l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.273423 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8d5f1383-42d7-47a1-9e47-8dba038241d2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "8d5f1383-42d7-47a1-9e47-8dba038241d2" (UID: "8d5f1383-42d7-47a1-9e47-8dba038241d2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.293887 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/074619b7-9220-4377-b93d-6088199a5e16-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "074619b7-9220-4377-b93d-6088199a5e16" (UID: "074619b7-9220-4377-b93d-6088199a5e16"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.323443 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qgf8m\" (UniqueName: \"kubernetes.io/projected/0cfb2e52-7371-4d38-994c-92b5b7d123cc-kube-api-access-qgf8m\") on node \"crc\" DevicePath \"\"" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.323466 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8d5f1383-42d7-47a1-9e47-8dba038241d2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.323476 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0cfb2e52-7371-4d38-994c-92b5b7d123cc-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.323486 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-566b8\" (UniqueName: \"kubernetes.io/projected/8d5f1383-42d7-47a1-9e47-8dba038241d2-kube-api-access-566b8\") on node \"crc\" DevicePath \"\"" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.323494 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pzp7l\" (UniqueName: \"kubernetes.io/projected/074619b7-9220-4377-b93d-6088199a5e16-kube-api-access-pzp7l\") on node \"crc\" DevicePath \"\"" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.323503 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8d5f1383-42d7-47a1-9e47-8dba038241d2-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.323510 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/074619b7-9220-4377-b93d-6088199a5e16-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 14:47:07 crc kubenswrapper[4751]: 
I0131 14:47:07.323518 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/074619b7-9220-4377-b93d-6088199a5e16-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.354566 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-jv94g"] Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.374887 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0cfb2e52-7371-4d38-994c-92b5b7d123cc-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0cfb2e52-7371-4d38-994c-92b5b7d123cc" (UID: "0cfb2e52-7371-4d38-994c-92b5b7d123cc"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.424533 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0cfb2e52-7371-4d38-994c-92b5b7d123cc-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.699296 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-5r6kv" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.699369 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-5r6kv" event={"ID":"8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea","Type":"ContainerDied","Data":"8d8b4a1528af48d18db181db8a7bebc79bb86f32aba8601a554e74b7bcaef05b"} Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.699435 4751 scope.go:117] "RemoveContainer" containerID="cc163d448fa8fad6b5ab0077c0960c4003a53c503f6d097090f206fed6245a22" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.705192 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-wcnsn" event={"ID":"074619b7-9220-4377-b93d-6088199a5e16","Type":"ContainerDied","Data":"092d3acc3e94a3dfd58bc12b9df82ef7950bf9b5a3e7871999c9c0efa3eb1c6d"} Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.705247 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-wcnsn" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.714304 4751 scope.go:117] "RemoveContainer" containerID="362555e9a4bda60e895d4cff8fad32fbdb6800b24c4a8d8deeb2ac026aebcc1b" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.716790 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-k2xfl" event={"ID":"e656c7af-fbd9-4e9c-ae61-d4142d37c89f","Type":"ContainerDied","Data":"ed378354261ea17a2d24e834a9aed8f1a45166375fb6ae1ce1dc38b9af3b5e0f"} Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.716812 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-k2xfl" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.719949 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-m4m6r" event={"ID":"8d5f1383-42d7-47a1-9e47-8dba038241d2","Type":"ContainerDied","Data":"cce74deb968262c3870a67f8d4e000b52815c6a74a72fbfe9270cef7ee6b23e7"} Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.719993 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-m4m6r" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.723121 4751 generic.go:334] "Generic (PLEG): container finished" podID="0cfb2e52-7371-4d38-994c-92b5b7d123cc" containerID="632dd7cf21c157e19b8b506aada8a2a0cc9ee7a4c7089d92374fc5dc9f67b1ea" exitCode=0 Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.723193 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gktqp" event={"ID":"0cfb2e52-7371-4d38-994c-92b5b7d123cc","Type":"ContainerDied","Data":"632dd7cf21c157e19b8b506aada8a2a0cc9ee7a4c7089d92374fc5dc9f67b1ea"} Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.723239 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-gktqp" event={"ID":"0cfb2e52-7371-4d38-994c-92b5b7d123cc","Type":"ContainerDied","Data":"6d8aa8d0e0300436346b38972033f042890b145471a99f8a553c2f56d280787e"} Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.723359 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-gktqp" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.728781 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jv94g" event={"ID":"9853dd16-26f9-4fe4-9468-52d39dd4dd1f","Type":"ContainerStarted","Data":"9fb26ef265cb69ab4af712c357f3693b010e56fd9a06a09ee5a8f9d24d9f4442"} Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.728816 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-jv94g" event={"ID":"9853dd16-26f9-4fe4-9468-52d39dd4dd1f","Type":"ContainerStarted","Data":"c98f7e5064bdc2ff52e44a7c61caa0f113237629962f0d6aede141632e8b125e"} Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.736646 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5r6kv"] Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.742895 4751 scope.go:117] "RemoveContainer" containerID="a757fc9386532749c4b360530fb36362a62f17d343908433db3d64555171c0b9" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.749309 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-5r6kv"] Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.765110 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-jv94g" podStartSLOduration=1.765093772 podStartE2EDuration="1.765093772s" podCreationTimestamp="2026-01-31 14:47:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:47:07.761282535 +0000 UTC m=+332.135995430" watchObservedRunningTime="2026-01-31 14:47:07.765093772 +0000 UTC m=+332.139806657" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.783338 4751 scope.go:117] "RemoveContainer" 
containerID="c0a252955873aa8b7cfdf7c617f1852f7e64f86f50411d0f5cc675309d6a71b6" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.791369 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-k2xfl"] Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.795745 4751 scope.go:117] "RemoveContainer" containerID="3bb7101aeb47dd5d5b9aa6ef1075e32a424c360c1ebaa7fd0787c20e4303f647" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.799042 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-k2xfl"] Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.804404 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-wcnsn"] Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.808677 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-wcnsn"] Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.812368 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-m4m6r"] Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.817082 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-m4m6r"] Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.822163 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-gktqp"] Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.823508 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-gktqp"] Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.824859 4751 scope.go:117] "RemoveContainer" containerID="874aebfb442c94d60aaad947db92520e6e5ff745ee226afefd00dd9dc85cb564" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.837987 4751 scope.go:117] "RemoveContainer" containerID="6c75c5ad4aa0723fec261497091fc30b60d95e73f9fe993ece85f3e477da66ef" Jan 31 
14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.850403 4751 scope.go:117] "RemoveContainer" containerID="eabca8f8fcdbfb2f04b488498b2a615e9946a5ba739f9fb75c570ef168f4bcd8" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.863494 4751 scope.go:117] "RemoveContainer" containerID="c0f53c12a6e17e599de6a624dae5a0ba532d7e88bc9baf9838475b082d03f347" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.877562 4751 scope.go:117] "RemoveContainer" containerID="e34fa377384a9a30f2361b80400e882c53155e0b5c8ad5f9beb3a5c178384ca0" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.887506 4751 scope.go:117] "RemoveContainer" containerID="632dd7cf21c157e19b8b506aada8a2a0cc9ee7a4c7089d92374fc5dc9f67b1ea" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.908553 4751 scope.go:117] "RemoveContainer" containerID="0fd8fddc66836c1f7f3a6139ad51fa7a751ab965677515d2805ae4e6c08a2765" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.925882 4751 scope.go:117] "RemoveContainer" containerID="aa50b668454ba4cf1d6033028034c77daf53f009e58a1184a7d22b857abf8b23" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.937023 4751 scope.go:117] "RemoveContainer" containerID="632dd7cf21c157e19b8b506aada8a2a0cc9ee7a4c7089d92374fc5dc9f67b1ea" Jan 31 14:47:07 crc kubenswrapper[4751]: E0131 14:47:07.937429 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"632dd7cf21c157e19b8b506aada8a2a0cc9ee7a4c7089d92374fc5dc9f67b1ea\": container with ID starting with 632dd7cf21c157e19b8b506aada8a2a0cc9ee7a4c7089d92374fc5dc9f67b1ea not found: ID does not exist" containerID="632dd7cf21c157e19b8b506aada8a2a0cc9ee7a4c7089d92374fc5dc9f67b1ea" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.937457 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"632dd7cf21c157e19b8b506aada8a2a0cc9ee7a4c7089d92374fc5dc9f67b1ea"} err="failed to get container status 
\"632dd7cf21c157e19b8b506aada8a2a0cc9ee7a4c7089d92374fc5dc9f67b1ea\": rpc error: code = NotFound desc = could not find container \"632dd7cf21c157e19b8b506aada8a2a0cc9ee7a4c7089d92374fc5dc9f67b1ea\": container with ID starting with 632dd7cf21c157e19b8b506aada8a2a0cc9ee7a4c7089d92374fc5dc9f67b1ea not found: ID does not exist" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.937478 4751 scope.go:117] "RemoveContainer" containerID="0fd8fddc66836c1f7f3a6139ad51fa7a751ab965677515d2805ae4e6c08a2765" Jan 31 14:47:07 crc kubenswrapper[4751]: E0131 14:47:07.937675 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0fd8fddc66836c1f7f3a6139ad51fa7a751ab965677515d2805ae4e6c08a2765\": container with ID starting with 0fd8fddc66836c1f7f3a6139ad51fa7a751ab965677515d2805ae4e6c08a2765 not found: ID does not exist" containerID="0fd8fddc66836c1f7f3a6139ad51fa7a751ab965677515d2805ae4e6c08a2765" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.937697 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0fd8fddc66836c1f7f3a6139ad51fa7a751ab965677515d2805ae4e6c08a2765"} err="failed to get container status \"0fd8fddc66836c1f7f3a6139ad51fa7a751ab965677515d2805ae4e6c08a2765\": rpc error: code = NotFound desc = could not find container \"0fd8fddc66836c1f7f3a6139ad51fa7a751ab965677515d2805ae4e6c08a2765\": container with ID starting with 0fd8fddc66836c1f7f3a6139ad51fa7a751ab965677515d2805ae4e6c08a2765 not found: ID does not exist" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.937710 4751 scope.go:117] "RemoveContainer" containerID="aa50b668454ba4cf1d6033028034c77daf53f009e58a1184a7d22b857abf8b23" Jan 31 14:47:07 crc kubenswrapper[4751]: E0131 14:47:07.937894 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"aa50b668454ba4cf1d6033028034c77daf53f009e58a1184a7d22b857abf8b23\": container with ID starting with aa50b668454ba4cf1d6033028034c77daf53f009e58a1184a7d22b857abf8b23 not found: ID does not exist" containerID="aa50b668454ba4cf1d6033028034c77daf53f009e58a1184a7d22b857abf8b23" Jan 31 14:47:07 crc kubenswrapper[4751]: I0131 14:47:07.937909 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"aa50b668454ba4cf1d6033028034c77daf53f009e58a1184a7d22b857abf8b23"} err="failed to get container status \"aa50b668454ba4cf1d6033028034c77daf53f009e58a1184a7d22b857abf8b23\": rpc error: code = NotFound desc = could not find container \"aa50b668454ba4cf1d6033028034c77daf53f009e58a1184a7d22b857abf8b23\": container with ID starting with aa50b668454ba4cf1d6033028034c77daf53f009e58a1184a7d22b857abf8b23 not found: ID does not exist" Jan 31 14:47:08 crc kubenswrapper[4751]: I0131 14:47:08.413763 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="074619b7-9220-4377-b93d-6088199a5e16" path="/var/lib/kubelet/pods/074619b7-9220-4377-b93d-6088199a5e16/volumes" Jan 31 14:47:08 crc kubenswrapper[4751]: I0131 14:47:08.414819 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0cfb2e52-7371-4d38-994c-92b5b7d123cc" path="/var/lib/kubelet/pods/0cfb2e52-7371-4d38-994c-92b5b7d123cc/volumes" Jan 31 14:47:08 crc kubenswrapper[4751]: I0131 14:47:08.415655 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea" path="/var/lib/kubelet/pods/8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea/volumes" Jan 31 14:47:08 crc kubenswrapper[4751]: I0131 14:47:08.416811 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d5f1383-42d7-47a1-9e47-8dba038241d2" path="/var/lib/kubelet/pods/8d5f1383-42d7-47a1-9e47-8dba038241d2/volumes" Jan 31 14:47:08 crc kubenswrapper[4751]: I0131 14:47:08.417594 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="e656c7af-fbd9-4e9c-ae61-d4142d37c89f" path="/var/lib/kubelet/pods/e656c7af-fbd9-4e9c-ae61-d4142d37c89f/volumes" Jan 31 14:47:08 crc kubenswrapper[4751]: I0131 14:47:08.742649 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-jv94g" Jan 31 14:47:08 crc kubenswrapper[4751]: I0131 14:47:08.744804 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-jv94g" Jan 31 14:47:08 crc kubenswrapper[4751]: I0131 14:47:08.825389 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-22krg"] Jan 31 14:47:08 crc kubenswrapper[4751]: E0131 14:47:08.825555 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="074619b7-9220-4377-b93d-6088199a5e16" containerName="registry-server" Jan 31 14:47:08 crc kubenswrapper[4751]: I0131 14:47:08.825566 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="074619b7-9220-4377-b93d-6088199a5e16" containerName="registry-server" Jan 31 14:47:08 crc kubenswrapper[4751]: E0131 14:47:08.825574 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d5f1383-42d7-47a1-9e47-8dba038241d2" containerName="extract-content" Jan 31 14:47:08 crc kubenswrapper[4751]: I0131 14:47:08.825580 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d5f1383-42d7-47a1-9e47-8dba038241d2" containerName="extract-content" Jan 31 14:47:08 crc kubenswrapper[4751]: E0131 14:47:08.825590 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d5f1383-42d7-47a1-9e47-8dba038241d2" containerName="registry-server" Jan 31 14:47:08 crc kubenswrapper[4751]: I0131 14:47:08.825596 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d5f1383-42d7-47a1-9e47-8dba038241d2" containerName="registry-server" Jan 31 14:47:08 crc kubenswrapper[4751]: E0131 14:47:08.825604 4751 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="e656c7af-fbd9-4e9c-ae61-d4142d37c89f" containerName="extract-utilities" Jan 31 14:47:08 crc kubenswrapper[4751]: I0131 14:47:08.825609 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e656c7af-fbd9-4e9c-ae61-d4142d37c89f" containerName="extract-utilities" Jan 31 14:47:08 crc kubenswrapper[4751]: E0131 14:47:08.825615 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="074619b7-9220-4377-b93d-6088199a5e16" containerName="extract-content" Jan 31 14:47:08 crc kubenswrapper[4751]: I0131 14:47:08.825621 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="074619b7-9220-4377-b93d-6088199a5e16" containerName="extract-content" Jan 31 14:47:08 crc kubenswrapper[4751]: E0131 14:47:08.825628 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e656c7af-fbd9-4e9c-ae61-d4142d37c89f" containerName="extract-content" Jan 31 14:47:08 crc kubenswrapper[4751]: I0131 14:47:08.825634 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e656c7af-fbd9-4e9c-ae61-d4142d37c89f" containerName="extract-content" Jan 31 14:47:08 crc kubenswrapper[4751]: E0131 14:47:08.825642 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cfb2e52-7371-4d38-994c-92b5b7d123cc" containerName="registry-server" Jan 31 14:47:08 crc kubenswrapper[4751]: I0131 14:47:08.825647 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cfb2e52-7371-4d38-994c-92b5b7d123cc" containerName="registry-server" Jan 31 14:47:08 crc kubenswrapper[4751]: E0131 14:47:08.825658 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cfb2e52-7371-4d38-994c-92b5b7d123cc" containerName="extract-content" Jan 31 14:47:08 crc kubenswrapper[4751]: I0131 14:47:08.825664 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cfb2e52-7371-4d38-994c-92b5b7d123cc" containerName="extract-content" Jan 31 14:47:08 crc kubenswrapper[4751]: E0131 14:47:08.825673 4751 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="e656c7af-fbd9-4e9c-ae61-d4142d37c89f" containerName="registry-server" Jan 31 14:47:08 crc kubenswrapper[4751]: I0131 14:47:08.825678 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e656c7af-fbd9-4e9c-ae61-d4142d37c89f" containerName="registry-server" Jan 31 14:47:08 crc kubenswrapper[4751]: E0131 14:47:08.825688 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0cfb2e52-7371-4d38-994c-92b5b7d123cc" containerName="extract-utilities" Jan 31 14:47:08 crc kubenswrapper[4751]: I0131 14:47:08.825693 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="0cfb2e52-7371-4d38-994c-92b5b7d123cc" containerName="extract-utilities" Jan 31 14:47:08 crc kubenswrapper[4751]: E0131 14:47:08.825704 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="074619b7-9220-4377-b93d-6088199a5e16" containerName="extract-utilities" Jan 31 14:47:08 crc kubenswrapper[4751]: I0131 14:47:08.825710 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="074619b7-9220-4377-b93d-6088199a5e16" containerName="extract-utilities" Jan 31 14:47:08 crc kubenswrapper[4751]: E0131 14:47:08.825718 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8d5f1383-42d7-47a1-9e47-8dba038241d2" containerName="extract-utilities" Jan 31 14:47:08 crc kubenswrapper[4751]: I0131 14:47:08.825723 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d5f1383-42d7-47a1-9e47-8dba038241d2" containerName="extract-utilities" Jan 31 14:47:08 crc kubenswrapper[4751]: E0131 14:47:08.825731 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea" containerName="marketplace-operator" Jan 31 14:47:08 crc kubenswrapper[4751]: I0131 14:47:08.825736 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea" containerName="marketplace-operator" Jan 31 14:47:08 crc kubenswrapper[4751]: I0131 14:47:08.825809 4751 memory_manager.go:354] "RemoveStaleState removing 
state" podUID="e656c7af-fbd9-4e9c-ae61-d4142d37c89f" containerName="registry-server" Jan 31 14:47:08 crc kubenswrapper[4751]: I0131 14:47:08.825820 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="074619b7-9220-4377-b93d-6088199a5e16" containerName="registry-server" Jan 31 14:47:08 crc kubenswrapper[4751]: I0131 14:47:08.825827 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="0cfb2e52-7371-4d38-994c-92b5b7d123cc" containerName="registry-server" Jan 31 14:47:08 crc kubenswrapper[4751]: I0131 14:47:08.825837 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="8bc2bde1-a50e-47bb-8211-4b5ed0ac74ea" containerName="marketplace-operator" Jan 31 14:47:08 crc kubenswrapper[4751]: I0131 14:47:08.825845 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="8d5f1383-42d7-47a1-9e47-8dba038241d2" containerName="registry-server" Jan 31 14:47:08 crc kubenswrapper[4751]: I0131 14:47:08.826462 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-22krg" Jan 31 14:47:08 crc kubenswrapper[4751]: I0131 14:47:08.830319 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 31 14:47:08 crc kubenswrapper[4751]: I0131 14:47:08.841392 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-22krg"] Jan 31 14:47:08 crc kubenswrapper[4751]: I0131 14:47:08.945089 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dppkm\" (UniqueName: \"kubernetes.io/projected/affc293d-ac4e-49ad-be4a-bc13d7c056a7-kube-api-access-dppkm\") pod \"redhat-marketplace-22krg\" (UID: \"affc293d-ac4e-49ad-be4a-bc13d7c056a7\") " pod="openshift-marketplace/redhat-marketplace-22krg" Jan 31 14:47:08 crc kubenswrapper[4751]: I0131 14:47:08.945139 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/affc293d-ac4e-49ad-be4a-bc13d7c056a7-utilities\") pod \"redhat-marketplace-22krg\" (UID: \"affc293d-ac4e-49ad-be4a-bc13d7c056a7\") " pod="openshift-marketplace/redhat-marketplace-22krg" Jan 31 14:47:08 crc kubenswrapper[4751]: I0131 14:47:08.945260 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/affc293d-ac4e-49ad-be4a-bc13d7c056a7-catalog-content\") pod \"redhat-marketplace-22krg\" (UID: \"affc293d-ac4e-49ad-be4a-bc13d7c056a7\") " pod="openshift-marketplace/redhat-marketplace-22krg" Jan 31 14:47:09 crc kubenswrapper[4751]: I0131 14:47:09.046243 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dppkm\" (UniqueName: \"kubernetes.io/projected/affc293d-ac4e-49ad-be4a-bc13d7c056a7-kube-api-access-dppkm\") pod \"redhat-marketplace-22krg\" (UID: 
\"affc293d-ac4e-49ad-be4a-bc13d7c056a7\") " pod="openshift-marketplace/redhat-marketplace-22krg" Jan 31 14:47:09 crc kubenswrapper[4751]: I0131 14:47:09.046301 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/affc293d-ac4e-49ad-be4a-bc13d7c056a7-utilities\") pod \"redhat-marketplace-22krg\" (UID: \"affc293d-ac4e-49ad-be4a-bc13d7c056a7\") " pod="openshift-marketplace/redhat-marketplace-22krg" Jan 31 14:47:09 crc kubenswrapper[4751]: I0131 14:47:09.046361 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/affc293d-ac4e-49ad-be4a-bc13d7c056a7-catalog-content\") pod \"redhat-marketplace-22krg\" (UID: \"affc293d-ac4e-49ad-be4a-bc13d7c056a7\") " pod="openshift-marketplace/redhat-marketplace-22krg" Jan 31 14:47:09 crc kubenswrapper[4751]: I0131 14:47:09.046839 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/affc293d-ac4e-49ad-be4a-bc13d7c056a7-catalog-content\") pod \"redhat-marketplace-22krg\" (UID: \"affc293d-ac4e-49ad-be4a-bc13d7c056a7\") " pod="openshift-marketplace/redhat-marketplace-22krg" Jan 31 14:47:09 crc kubenswrapper[4751]: I0131 14:47:09.047030 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/affc293d-ac4e-49ad-be4a-bc13d7c056a7-utilities\") pod \"redhat-marketplace-22krg\" (UID: \"affc293d-ac4e-49ad-be4a-bc13d7c056a7\") " pod="openshift-marketplace/redhat-marketplace-22krg" Jan 31 14:47:09 crc kubenswrapper[4751]: I0131 14:47:09.079016 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dppkm\" (UniqueName: \"kubernetes.io/projected/affc293d-ac4e-49ad-be4a-bc13d7c056a7-kube-api-access-dppkm\") pod \"redhat-marketplace-22krg\" (UID: \"affc293d-ac4e-49ad-be4a-bc13d7c056a7\") " 
pod="openshift-marketplace/redhat-marketplace-22krg" Jan 31 14:47:09 crc kubenswrapper[4751]: I0131 14:47:09.142099 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-22krg" Jan 31 14:47:09 crc kubenswrapper[4751]: I0131 14:47:09.594459 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-22krg"] Jan 31 14:47:09 crc kubenswrapper[4751]: I0131 14:47:09.747800 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-22krg" event={"ID":"affc293d-ac4e-49ad-be4a-bc13d7c056a7","Type":"ContainerStarted","Data":"e2924e6df1ae2df930cc380ff4f034e1e01fc092d1b0cac883bd6d09d6d43e8e"} Jan 31 14:47:09 crc kubenswrapper[4751]: I0131 14:47:09.828708 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-678m7"] Jan 31 14:47:09 crc kubenswrapper[4751]: I0131 14:47:09.829942 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-678m7" Jan 31 14:47:09 crc kubenswrapper[4751]: I0131 14:47:09.833662 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 31 14:47:09 crc kubenswrapper[4751]: I0131 14:47:09.839431 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-678m7"] Jan 31 14:47:09 crc kubenswrapper[4751]: I0131 14:47:09.959017 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43fbbbf2-c128-46a4-9cc3-99e46c617027-utilities\") pod \"redhat-operators-678m7\" (UID: \"43fbbbf2-c128-46a4-9cc3-99e46c617027\") " pod="openshift-marketplace/redhat-operators-678m7" Jan 31 14:47:09 crc kubenswrapper[4751]: I0131 14:47:09.959185 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43fbbbf2-c128-46a4-9cc3-99e46c617027-catalog-content\") pod \"redhat-operators-678m7\" (UID: \"43fbbbf2-c128-46a4-9cc3-99e46c617027\") " pod="openshift-marketplace/redhat-operators-678m7" Jan 31 14:47:09 crc kubenswrapper[4751]: I0131 14:47:09.959244 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcznw\" (UniqueName: \"kubernetes.io/projected/43fbbbf2-c128-46a4-9cc3-99e46c617027-kube-api-access-wcznw\") pod \"redhat-operators-678m7\" (UID: \"43fbbbf2-c128-46a4-9cc3-99e46c617027\") " pod="openshift-marketplace/redhat-operators-678m7" Jan 31 14:47:10 crc kubenswrapper[4751]: I0131 14:47:10.060871 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43fbbbf2-c128-46a4-9cc3-99e46c617027-utilities\") pod \"redhat-operators-678m7\" (UID: \"43fbbbf2-c128-46a4-9cc3-99e46c617027\") " 
pod="openshift-marketplace/redhat-operators-678m7" Jan 31 14:47:10 crc kubenswrapper[4751]: I0131 14:47:10.060940 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43fbbbf2-c128-46a4-9cc3-99e46c617027-catalog-content\") pod \"redhat-operators-678m7\" (UID: \"43fbbbf2-c128-46a4-9cc3-99e46c617027\") " pod="openshift-marketplace/redhat-operators-678m7" Jan 31 14:47:10 crc kubenswrapper[4751]: I0131 14:47:10.060994 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wcznw\" (UniqueName: \"kubernetes.io/projected/43fbbbf2-c128-46a4-9cc3-99e46c617027-kube-api-access-wcznw\") pod \"redhat-operators-678m7\" (UID: \"43fbbbf2-c128-46a4-9cc3-99e46c617027\") " pod="openshift-marketplace/redhat-operators-678m7" Jan 31 14:47:10 crc kubenswrapper[4751]: I0131 14:47:10.061778 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/43fbbbf2-c128-46a4-9cc3-99e46c617027-utilities\") pod \"redhat-operators-678m7\" (UID: \"43fbbbf2-c128-46a4-9cc3-99e46c617027\") " pod="openshift-marketplace/redhat-operators-678m7" Jan 31 14:47:10 crc kubenswrapper[4751]: I0131 14:47:10.061984 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/43fbbbf2-c128-46a4-9cc3-99e46c617027-catalog-content\") pod \"redhat-operators-678m7\" (UID: \"43fbbbf2-c128-46a4-9cc3-99e46c617027\") " pod="openshift-marketplace/redhat-operators-678m7" Jan 31 14:47:10 crc kubenswrapper[4751]: I0131 14:47:10.079293 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wcznw\" (UniqueName: \"kubernetes.io/projected/43fbbbf2-c128-46a4-9cc3-99e46c617027-kube-api-access-wcznw\") pod \"redhat-operators-678m7\" (UID: \"43fbbbf2-c128-46a4-9cc3-99e46c617027\") " pod="openshift-marketplace/redhat-operators-678m7" Jan 
31 14:47:10 crc kubenswrapper[4751]: I0131 14:47:10.161267 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-678m7" Jan 31 14:47:10 crc kubenswrapper[4751]: I0131 14:47:10.559996 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-678m7"] Jan 31 14:47:10 crc kubenswrapper[4751]: W0131 14:47:10.567120 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod43fbbbf2_c128_46a4_9cc3_99e46c617027.slice/crio-d06bd7bb965534d3f4081f66291469dfeeba3fc44f0ad41ce4151ef674a66b32 WatchSource:0}: Error finding container d06bd7bb965534d3f4081f66291469dfeeba3fc44f0ad41ce4151ef674a66b32: Status 404 returned error can't find the container with id d06bd7bb965534d3f4081f66291469dfeeba3fc44f0ad41ce4151ef674a66b32 Jan 31 14:47:10 crc kubenswrapper[4751]: I0131 14:47:10.754794 4751 generic.go:334] "Generic (PLEG): container finished" podID="affc293d-ac4e-49ad-be4a-bc13d7c056a7" containerID="ef0b6ffeed9764097de7924c9d5800599c1fcf813b5b2868a854c3e83b3eddeb" exitCode=0 Jan 31 14:47:10 crc kubenswrapper[4751]: I0131 14:47:10.754870 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-22krg" event={"ID":"affc293d-ac4e-49ad-be4a-bc13d7c056a7","Type":"ContainerDied","Data":"ef0b6ffeed9764097de7924c9d5800599c1fcf813b5b2868a854c3e83b3eddeb"} Jan 31 14:47:10 crc kubenswrapper[4751]: I0131 14:47:10.759756 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-678m7" event={"ID":"43fbbbf2-c128-46a4-9cc3-99e46c617027","Type":"ContainerStarted","Data":"f2cafee39d3e2d35eefedf868fc04160baf4ea42c8c5028c544445c72eedb2be"} Jan 31 14:47:10 crc kubenswrapper[4751]: I0131 14:47:10.759800 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-678m7" 
event={"ID":"43fbbbf2-c128-46a4-9cc3-99e46c617027","Type":"ContainerStarted","Data":"d06bd7bb965534d3f4081f66291469dfeeba3fc44f0ad41ce4151ef674a66b32"} Jan 31 14:47:11 crc kubenswrapper[4751]: I0131 14:47:11.229381 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-qcs7h"] Jan 31 14:47:11 crc kubenswrapper[4751]: I0131 14:47:11.231256 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qcs7h" Jan 31 14:47:11 crc kubenswrapper[4751]: I0131 14:47:11.234397 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 31 14:47:11 crc kubenswrapper[4751]: I0131 14:47:11.235965 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qcs7h"] Jan 31 14:47:11 crc kubenswrapper[4751]: I0131 14:47:11.275938 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c83f0a10-f56b-4795-93b9-ee224d439648-utilities\") pod \"certified-operators-qcs7h\" (UID: \"c83f0a10-f56b-4795-93b9-ee224d439648\") " pod="openshift-marketplace/certified-operators-qcs7h" Jan 31 14:47:11 crc kubenswrapper[4751]: I0131 14:47:11.276007 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c83f0a10-f56b-4795-93b9-ee224d439648-catalog-content\") pod \"certified-operators-qcs7h\" (UID: \"c83f0a10-f56b-4795-93b9-ee224d439648\") " pod="openshift-marketplace/certified-operators-qcs7h" Jan 31 14:47:11 crc kubenswrapper[4751]: I0131 14:47:11.276034 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w72wf\" (UniqueName: \"kubernetes.io/projected/c83f0a10-f56b-4795-93b9-ee224d439648-kube-api-access-w72wf\") pod 
\"certified-operators-qcs7h\" (UID: \"c83f0a10-f56b-4795-93b9-ee224d439648\") " pod="openshift-marketplace/certified-operators-qcs7h" Jan 31 14:47:11 crc kubenswrapper[4751]: I0131 14:47:11.378036 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c83f0a10-f56b-4795-93b9-ee224d439648-utilities\") pod \"certified-operators-qcs7h\" (UID: \"c83f0a10-f56b-4795-93b9-ee224d439648\") " pod="openshift-marketplace/certified-operators-qcs7h" Jan 31 14:47:11 crc kubenswrapper[4751]: I0131 14:47:11.378538 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c83f0a10-f56b-4795-93b9-ee224d439648-catalog-content\") pod \"certified-operators-qcs7h\" (UID: \"c83f0a10-f56b-4795-93b9-ee224d439648\") " pod="openshift-marketplace/certified-operators-qcs7h" Jan 31 14:47:11 crc kubenswrapper[4751]: I0131 14:47:11.378697 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w72wf\" (UniqueName: \"kubernetes.io/projected/c83f0a10-f56b-4795-93b9-ee224d439648-kube-api-access-w72wf\") pod \"certified-operators-qcs7h\" (UID: \"c83f0a10-f56b-4795-93b9-ee224d439648\") " pod="openshift-marketplace/certified-operators-qcs7h" Jan 31 14:47:11 crc kubenswrapper[4751]: I0131 14:47:11.378794 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c83f0a10-f56b-4795-93b9-ee224d439648-utilities\") pod \"certified-operators-qcs7h\" (UID: \"c83f0a10-f56b-4795-93b9-ee224d439648\") " pod="openshift-marketplace/certified-operators-qcs7h" Jan 31 14:47:11 crc kubenswrapper[4751]: I0131 14:47:11.379138 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c83f0a10-f56b-4795-93b9-ee224d439648-catalog-content\") pod \"certified-operators-qcs7h\" (UID: 
\"c83f0a10-f56b-4795-93b9-ee224d439648\") " pod="openshift-marketplace/certified-operators-qcs7h" Jan 31 14:47:11 crc kubenswrapper[4751]: I0131 14:47:11.866328 4751 generic.go:334] "Generic (PLEG): container finished" podID="43fbbbf2-c128-46a4-9cc3-99e46c617027" containerID="f2cafee39d3e2d35eefedf868fc04160baf4ea42c8c5028c544445c72eedb2be" exitCode=0 Jan 31 14:47:11 crc kubenswrapper[4751]: I0131 14:47:11.866546 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-678m7" event={"ID":"43fbbbf2-c128-46a4-9cc3-99e46c617027","Type":"ContainerDied","Data":"f2cafee39d3e2d35eefedf868fc04160baf4ea42c8c5028c544445c72eedb2be"} Jan 31 14:47:11 crc kubenswrapper[4751]: I0131 14:47:11.871039 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w72wf\" (UniqueName: \"kubernetes.io/projected/c83f0a10-f56b-4795-93b9-ee224d439648-kube-api-access-w72wf\") pod \"certified-operators-qcs7h\" (UID: \"c83f0a10-f56b-4795-93b9-ee224d439648\") " pod="openshift-marketplace/certified-operators-qcs7h" Jan 31 14:47:12 crc kubenswrapper[4751]: I0131 14:47:12.159923 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-qcs7h" Jan 31 14:47:13 crc kubenswrapper[4751]: I0131 14:47:12.428167 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-gr5gf"] Jan 31 14:47:13 crc kubenswrapper[4751]: I0131 14:47:12.429466 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-gr5gf" Jan 31 14:47:13 crc kubenswrapper[4751]: I0131 14:47:12.431778 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gr5gf"] Jan 31 14:47:13 crc kubenswrapper[4751]: I0131 14:47:12.432419 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 31 14:47:13 crc kubenswrapper[4751]: I0131 14:47:12.495200 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdm52\" (UniqueName: \"kubernetes.io/projected/2eb5e3aa-17fa-49a0-a422-bc69a8a410fb-kube-api-access-fdm52\") pod \"community-operators-gr5gf\" (UID: \"2eb5e3aa-17fa-49a0-a422-bc69a8a410fb\") " pod="openshift-marketplace/community-operators-gr5gf" Jan 31 14:47:13 crc kubenswrapper[4751]: I0131 14:47:12.495254 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2eb5e3aa-17fa-49a0-a422-bc69a8a410fb-utilities\") pod \"community-operators-gr5gf\" (UID: \"2eb5e3aa-17fa-49a0-a422-bc69a8a410fb\") " pod="openshift-marketplace/community-operators-gr5gf" Jan 31 14:47:13 crc kubenswrapper[4751]: I0131 14:47:12.495294 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2eb5e3aa-17fa-49a0-a422-bc69a8a410fb-catalog-content\") pod \"community-operators-gr5gf\" (UID: \"2eb5e3aa-17fa-49a0-a422-bc69a8a410fb\") " pod="openshift-marketplace/community-operators-gr5gf" Jan 31 14:47:13 crc kubenswrapper[4751]: I0131 14:47:12.593588 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-qcs7h"] Jan 31 14:47:13 crc kubenswrapper[4751]: I0131 14:47:12.596680 4751 reconciler_common.go:218] "operationExecutor.MountVolume started 
for volume \"kube-api-access-fdm52\" (UniqueName: \"kubernetes.io/projected/2eb5e3aa-17fa-49a0-a422-bc69a8a410fb-kube-api-access-fdm52\") pod \"community-operators-gr5gf\" (UID: \"2eb5e3aa-17fa-49a0-a422-bc69a8a410fb\") " pod="openshift-marketplace/community-operators-gr5gf" Jan 31 14:47:13 crc kubenswrapper[4751]: I0131 14:47:12.596715 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2eb5e3aa-17fa-49a0-a422-bc69a8a410fb-utilities\") pod \"community-operators-gr5gf\" (UID: \"2eb5e3aa-17fa-49a0-a422-bc69a8a410fb\") " pod="openshift-marketplace/community-operators-gr5gf" Jan 31 14:47:13 crc kubenswrapper[4751]: I0131 14:47:12.596747 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2eb5e3aa-17fa-49a0-a422-bc69a8a410fb-catalog-content\") pod \"community-operators-gr5gf\" (UID: \"2eb5e3aa-17fa-49a0-a422-bc69a8a410fb\") " pod="openshift-marketplace/community-operators-gr5gf" Jan 31 14:47:13 crc kubenswrapper[4751]: I0131 14:47:12.597168 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2eb5e3aa-17fa-49a0-a422-bc69a8a410fb-catalog-content\") pod \"community-operators-gr5gf\" (UID: \"2eb5e3aa-17fa-49a0-a422-bc69a8a410fb\") " pod="openshift-marketplace/community-operators-gr5gf" Jan 31 14:47:13 crc kubenswrapper[4751]: I0131 14:47:12.597607 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2eb5e3aa-17fa-49a0-a422-bc69a8a410fb-utilities\") pod \"community-operators-gr5gf\" (UID: \"2eb5e3aa-17fa-49a0-a422-bc69a8a410fb\") " pod="openshift-marketplace/community-operators-gr5gf" Jan 31 14:47:13 crc kubenswrapper[4751]: I0131 14:47:12.621093 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdm52\" (UniqueName: 
\"kubernetes.io/projected/2eb5e3aa-17fa-49a0-a422-bc69a8a410fb-kube-api-access-fdm52\") pod \"community-operators-gr5gf\" (UID: \"2eb5e3aa-17fa-49a0-a422-bc69a8a410fb\") " pod="openshift-marketplace/community-operators-gr5gf" Jan 31 14:47:13 crc kubenswrapper[4751]: I0131 14:47:12.745944 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-gr5gf" Jan 31 14:47:13 crc kubenswrapper[4751]: I0131 14:47:12.879408 4751 generic.go:334] "Generic (PLEG): container finished" podID="affc293d-ac4e-49ad-be4a-bc13d7c056a7" containerID="2a4c0e6fdb547d7da161d6df9f6a153b6a313115facfb8588f53546344a84b83" exitCode=0 Jan 31 14:47:13 crc kubenswrapper[4751]: I0131 14:47:12.879482 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-22krg" event={"ID":"affc293d-ac4e-49ad-be4a-bc13d7c056a7","Type":"ContainerDied","Data":"2a4c0e6fdb547d7da161d6df9f6a153b6a313115facfb8588f53546344a84b83"} Jan 31 14:47:13 crc kubenswrapper[4751]: I0131 14:47:12.882284 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-678m7" event={"ID":"43fbbbf2-c128-46a4-9cc3-99e46c617027","Type":"ContainerStarted","Data":"cbc01a65e88cb04d479e3fbee6b56cd23c306d0d404314544d61604afee6ce91"} Jan 31 14:47:13 crc kubenswrapper[4751]: I0131 14:47:12.885504 4751 generic.go:334] "Generic (PLEG): container finished" podID="c83f0a10-f56b-4795-93b9-ee224d439648" containerID="f0c7d210f4905c5ebd63ad1688d4962a22611a0650fc973151439c47e20f365f" exitCode=0 Jan 31 14:47:13 crc kubenswrapper[4751]: I0131 14:47:12.886240 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcs7h" event={"ID":"c83f0a10-f56b-4795-93b9-ee224d439648","Type":"ContainerDied","Data":"f0c7d210f4905c5ebd63ad1688d4962a22611a0650fc973151439c47e20f365f"} Jan 31 14:47:13 crc kubenswrapper[4751]: I0131 14:47:12.886368 4751 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="openshift-marketplace/certified-operators-qcs7h" event={"ID":"c83f0a10-f56b-4795-93b9-ee224d439648","Type":"ContainerStarted","Data":"43ba868ae0b26e8ee6fc70a86bc9fb9f499781411409ffef9af2e7dd09c6176a"} Jan 31 14:47:13 crc kubenswrapper[4751]: I0131 14:47:13.446807 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-gr5gf"] Jan 31 14:47:13 crc kubenswrapper[4751]: W0131 14:47:13.454940 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2eb5e3aa_17fa_49a0_a422_bc69a8a410fb.slice/crio-a9230f306dbb7839004de374b90ecaee07288759ea0cfb21958fdb7f3d0a7a06 WatchSource:0}: Error finding container a9230f306dbb7839004de374b90ecaee07288759ea0cfb21958fdb7f3d0a7a06: Status 404 returned error can't find the container with id a9230f306dbb7839004de374b90ecaee07288759ea0cfb21958fdb7f3d0a7a06 Jan 31 14:47:13 crc kubenswrapper[4751]: I0131 14:47:13.892505 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcs7h" event={"ID":"c83f0a10-f56b-4795-93b9-ee224d439648","Type":"ContainerStarted","Data":"0e1f4b1ea2d1f4c691c80005d0d4b88eefb40ca9917b8e2e9866ba3f5c04a5c5"} Jan 31 14:47:13 crc kubenswrapper[4751]: I0131 14:47:13.893738 4751 generic.go:334] "Generic (PLEG): container finished" podID="2eb5e3aa-17fa-49a0-a422-bc69a8a410fb" containerID="c4234f74dd13343823cab275e87af0dec8660d77f7c5674d07ed63ca0ba425fa" exitCode=0 Jan 31 14:47:13 crc kubenswrapper[4751]: I0131 14:47:13.893795 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gr5gf" event={"ID":"2eb5e3aa-17fa-49a0-a422-bc69a8a410fb","Type":"ContainerDied","Data":"c4234f74dd13343823cab275e87af0dec8660d77f7c5674d07ed63ca0ba425fa"} Jan 31 14:47:13 crc kubenswrapper[4751]: I0131 14:47:13.893830 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/community-operators-gr5gf" event={"ID":"2eb5e3aa-17fa-49a0-a422-bc69a8a410fb","Type":"ContainerStarted","Data":"a9230f306dbb7839004de374b90ecaee07288759ea0cfb21958fdb7f3d0a7a06"} Jan 31 14:47:13 crc kubenswrapper[4751]: I0131 14:47:13.896387 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-22krg" event={"ID":"affc293d-ac4e-49ad-be4a-bc13d7c056a7","Type":"ContainerStarted","Data":"c665d5714ce1ec2e2819ec5b82889b2b386aa67d7901b6a62dbf023eefbc4de3"} Jan 31 14:47:13 crc kubenswrapper[4751]: I0131 14:47:13.897998 4751 generic.go:334] "Generic (PLEG): container finished" podID="43fbbbf2-c128-46a4-9cc3-99e46c617027" containerID="cbc01a65e88cb04d479e3fbee6b56cd23c306d0d404314544d61604afee6ce91" exitCode=0 Jan 31 14:47:13 crc kubenswrapper[4751]: I0131 14:47:13.898022 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-678m7" event={"ID":"43fbbbf2-c128-46a4-9cc3-99e46c617027","Type":"ContainerDied","Data":"cbc01a65e88cb04d479e3fbee6b56cd23c306d0d404314544d61604afee6ce91"} Jan 31 14:47:13 crc kubenswrapper[4751]: I0131 14:47:13.938834 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-22krg" podStartSLOduration=3.372518422 podStartE2EDuration="5.938814955s" podCreationTimestamp="2026-01-31 14:47:08 +0000 UTC" firstStartedPulling="2026-01-31 14:47:10.757200016 +0000 UTC m=+335.131912911" lastFinishedPulling="2026-01-31 14:47:13.323496559 +0000 UTC m=+337.698209444" observedRunningTime="2026-01-31 14:47:13.935813921 +0000 UTC m=+338.310526806" watchObservedRunningTime="2026-01-31 14:47:13.938814955 +0000 UTC m=+338.313527840" Jan 31 14:47:14 crc kubenswrapper[4751]: I0131 14:47:14.908451 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-678m7" 
event={"ID":"43fbbbf2-c128-46a4-9cc3-99e46c617027","Type":"ContainerStarted","Data":"cbc93651e15fe9b1b53de7dc02f8cb49804246c847642a9031bf67df3f58a6d8"} Jan 31 14:47:14 crc kubenswrapper[4751]: I0131 14:47:14.914172 4751 generic.go:334] "Generic (PLEG): container finished" podID="c83f0a10-f56b-4795-93b9-ee224d439648" containerID="0e1f4b1ea2d1f4c691c80005d0d4b88eefb40ca9917b8e2e9866ba3f5c04a5c5" exitCode=0 Jan 31 14:47:14 crc kubenswrapper[4751]: I0131 14:47:14.915350 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcs7h" event={"ID":"c83f0a10-f56b-4795-93b9-ee224d439648","Type":"ContainerDied","Data":"0e1f4b1ea2d1f4c691c80005d0d4b88eefb40ca9917b8e2e9866ba3f5c04a5c5"} Jan 31 14:47:14 crc kubenswrapper[4751]: I0131 14:47:14.928038 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-678m7" podStartSLOduration=3.517110504 podStartE2EDuration="5.928021128s" podCreationTimestamp="2026-01-31 14:47:09 +0000 UTC" firstStartedPulling="2026-01-31 14:47:11.868354389 +0000 UTC m=+336.243067274" lastFinishedPulling="2026-01-31 14:47:14.279265013 +0000 UTC m=+338.653977898" observedRunningTime="2026-01-31 14:47:14.927605776 +0000 UTC m=+339.302318691" watchObservedRunningTime="2026-01-31 14:47:14.928021128 +0000 UTC m=+339.302734003" Jan 31 14:47:15 crc kubenswrapper[4751]: I0131 14:47:15.926840 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-qcs7h" event={"ID":"c83f0a10-f56b-4795-93b9-ee224d439648","Type":"ContainerStarted","Data":"205e2afe79bb15a6fb42be9a5245809e70944c85ed4ba914f8281a5585cee3a0"} Jan 31 14:47:15 crc kubenswrapper[4751]: I0131 14:47:15.929500 4751 generic.go:334] "Generic (PLEG): container finished" podID="2eb5e3aa-17fa-49a0-a422-bc69a8a410fb" containerID="f4c859cc4863edeb57864d4b70863a93d3378c06e1797e924a9ef9213bda12a3" exitCode=0 Jan 31 14:47:15 crc kubenswrapper[4751]: I0131 14:47:15.930515 
4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gr5gf" event={"ID":"2eb5e3aa-17fa-49a0-a422-bc69a8a410fb","Type":"ContainerDied","Data":"f4c859cc4863edeb57864d4b70863a93d3378c06e1797e924a9ef9213bda12a3"} Jan 31 14:47:15 crc kubenswrapper[4751]: I0131 14:47:15.946168 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-qcs7h" podStartSLOduration=2.428422992 podStartE2EDuration="4.946155062s" podCreationTimestamp="2026-01-31 14:47:11 +0000 UTC" firstStartedPulling="2026-01-31 14:47:12.888286154 +0000 UTC m=+337.262999039" lastFinishedPulling="2026-01-31 14:47:15.406018224 +0000 UTC m=+339.780731109" observedRunningTime="2026-01-31 14:47:15.941036359 +0000 UTC m=+340.315749234" watchObservedRunningTime="2026-01-31 14:47:15.946155062 +0000 UTC m=+340.320867947" Jan 31 14:47:17 crc kubenswrapper[4751]: I0131 14:47:17.941099 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-gr5gf" event={"ID":"2eb5e3aa-17fa-49a0-a422-bc69a8a410fb","Type":"ContainerStarted","Data":"a295af802189ad1afcb88d928212ac435dbd647d08b6f500b14457174599fe98"} Jan 31 14:47:17 crc kubenswrapper[4751]: I0131 14:47:17.960893 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-gr5gf" podStartSLOduration=3.533130167 podStartE2EDuration="5.960877685s" podCreationTimestamp="2026-01-31 14:47:12 +0000 UTC" firstStartedPulling="2026-01-31 14:47:13.895308125 +0000 UTC m=+338.270021010" lastFinishedPulling="2026-01-31 14:47:16.323055643 +0000 UTC m=+340.697768528" observedRunningTime="2026-01-31 14:47:17.959008603 +0000 UTC m=+342.333721488" watchObservedRunningTime="2026-01-31 14:47:17.960877685 +0000 UTC m=+342.335590580" Jan 31 14:47:19 crc kubenswrapper[4751]: I0131 14:47:19.143577 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-marketplace-22krg" Jan 31 14:47:19 crc kubenswrapper[4751]: I0131 14:47:19.143839 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-22krg" Jan 31 14:47:19 crc kubenswrapper[4751]: I0131 14:47:19.215801 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-22krg" Jan 31 14:47:19 crc kubenswrapper[4751]: I0131 14:47:19.997693 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-22krg" Jan 31 14:47:20 crc kubenswrapper[4751]: I0131 14:47:20.161757 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-678m7" Jan 31 14:47:20 crc kubenswrapper[4751]: I0131 14:47:20.162047 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-678m7" Jan 31 14:47:21 crc kubenswrapper[4751]: I0131 14:47:21.198781 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-678m7" podUID="43fbbbf2-c128-46a4-9cc3-99e46c617027" containerName="registry-server" probeResult="failure" output=< Jan 31 14:47:21 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s Jan 31 14:47:21 crc kubenswrapper[4751]: > Jan 31 14:47:22 crc kubenswrapper[4751]: I0131 14:47:22.159999 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-qcs7h" Jan 31 14:47:22 crc kubenswrapper[4751]: I0131 14:47:22.160336 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-qcs7h" Jan 31 14:47:22 crc kubenswrapper[4751]: I0131 14:47:22.219765 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-qcs7h" Jan 31 14:47:22 
crc kubenswrapper[4751]: I0131 14:47:22.746401 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-gr5gf" Jan 31 14:47:22 crc kubenswrapper[4751]: I0131 14:47:22.746474 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-gr5gf" Jan 31 14:47:22 crc kubenswrapper[4751]: I0131 14:47:22.788942 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-gr5gf" Jan 31 14:47:23 crc kubenswrapper[4751]: I0131 14:47:23.016637 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-qcs7h" Jan 31 14:47:23 crc kubenswrapper[4751]: I0131 14:47:23.022414 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-gr5gf" Jan 31 14:47:24 crc kubenswrapper[4751]: I0131 14:47:24.148577 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-5fdjn" Jan 31 14:47:24 crc kubenswrapper[4751]: I0131 14:47:24.211476 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mpbgx"] Jan 31 14:47:30 crc kubenswrapper[4751]: I0131 14:47:30.214001 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-678m7" Jan 31 14:47:30 crc kubenswrapper[4751]: I0131 14:47:30.258301 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-678m7" Jan 31 14:47:38 crc kubenswrapper[4751]: I0131 14:47:38.897021 4751 patch_prober.go:28] interesting pod/machine-config-daemon-2wpj7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 
127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 14:47:38 crc kubenswrapper[4751]: I0131 14:47:38.897800 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 14:47:49 crc kubenswrapper[4751]: I0131 14:47:49.264023 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" podUID="4e18e163-6cf0-48ef-9a6f-90cbece870b0" containerName="registry" containerID="cri-o://4a4776950d27c1d1245ca6dd71fb7012b30d42bb2d21525539ad27b3f377c032" gracePeriod=30 Jan 31 14:47:50 crc kubenswrapper[4751]: I0131 14:47:50.150159 4751 generic.go:334] "Generic (PLEG): container finished" podID="4e18e163-6cf0-48ef-9a6f-90cbece870b0" containerID="4a4776950d27c1d1245ca6dd71fb7012b30d42bb2d21525539ad27b3f377c032" exitCode=0 Jan 31 14:47:50 crc kubenswrapper[4751]: I0131 14:47:50.150211 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" event={"ID":"4e18e163-6cf0-48ef-9a6f-90cbece870b0","Type":"ContainerDied","Data":"4a4776950d27c1d1245ca6dd71fb7012b30d42bb2d21525539ad27b3f377c032"} Jan 31 14:47:50 crc kubenswrapper[4751]: I0131 14:47:50.269461 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:47:50 crc kubenswrapper[4751]: I0131 14:47:50.339297 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4e18e163-6cf0-48ef-9a6f-90cbece870b0-installation-pull-secrets\") pod \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " Jan 31 14:47:50 crc kubenswrapper[4751]: I0131 14:47:50.339614 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " Jan 31 14:47:50 crc kubenswrapper[4751]: I0131 14:47:50.339663 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4e18e163-6cf0-48ef-9a6f-90cbece870b0-bound-sa-token\") pod \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " Jan 31 14:47:50 crc kubenswrapper[4751]: I0131 14:47:50.339729 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/4e18e163-6cf0-48ef-9a6f-90cbece870b0-ca-trust-extracted\") pod \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " Jan 31 14:47:50 crc kubenswrapper[4751]: I0131 14:47:50.339762 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4e18e163-6cf0-48ef-9a6f-90cbece870b0-registry-tls\") pod \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " Jan 31 14:47:50 crc kubenswrapper[4751]: I0131 14:47:50.339804 4751 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4e18e163-6cf0-48ef-9a6f-90cbece870b0-trusted-ca\") pod \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " Jan 31 14:47:50 crc kubenswrapper[4751]: I0131 14:47:50.339837 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4e18e163-6cf0-48ef-9a6f-90cbece870b0-registry-certificates\") pod \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " Jan 31 14:47:50 crc kubenswrapper[4751]: I0131 14:47:50.339874 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llx87\" (UniqueName: \"kubernetes.io/projected/4e18e163-6cf0-48ef-9a6f-90cbece870b0-kube-api-access-llx87\") pod \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\" (UID: \"4e18e163-6cf0-48ef-9a6f-90cbece870b0\") " Jan 31 14:47:50 crc kubenswrapper[4751]: I0131 14:47:50.342386 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e18e163-6cf0-48ef-9a6f-90cbece870b0-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "4e18e163-6cf0-48ef-9a6f-90cbece870b0" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:47:50 crc kubenswrapper[4751]: I0131 14:47:50.342579 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4e18e163-6cf0-48ef-9a6f-90cbece870b0-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "4e18e163-6cf0-48ef-9a6f-90cbece870b0" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0"). InnerVolumeSpecName "registry-certificates". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:47:50 crc kubenswrapper[4751]: I0131 14:47:50.346405 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e18e163-6cf0-48ef-9a6f-90cbece870b0-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "4e18e163-6cf0-48ef-9a6f-90cbece870b0" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:47:50 crc kubenswrapper[4751]: I0131 14:47:50.349125 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e18e163-6cf0-48ef-9a6f-90cbece870b0-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "4e18e163-6cf0-48ef-9a6f-90cbece870b0" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:47:50 crc kubenswrapper[4751]: I0131 14:47:50.349408 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4e18e163-6cf0-48ef-9a6f-90cbece870b0-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "4e18e163-6cf0-48ef-9a6f-90cbece870b0" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:47:50 crc kubenswrapper[4751]: I0131 14:47:50.350043 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4e18e163-6cf0-48ef-9a6f-90cbece870b0-kube-api-access-llx87" (OuterVolumeSpecName: "kube-api-access-llx87") pod "4e18e163-6cf0-48ef-9a6f-90cbece870b0" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0"). InnerVolumeSpecName "kube-api-access-llx87". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:47:50 crc kubenswrapper[4751]: I0131 14:47:50.354018 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "4e18e163-6cf0-48ef-9a6f-90cbece870b0" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 31 14:47:50 crc kubenswrapper[4751]: I0131 14:47:50.357812 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4e18e163-6cf0-48ef-9a6f-90cbece870b0-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "4e18e163-6cf0-48ef-9a6f-90cbece870b0" (UID: "4e18e163-6cf0-48ef-9a6f-90cbece870b0"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:47:50 crc kubenswrapper[4751]: I0131 14:47:50.440529 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llx87\" (UniqueName: \"kubernetes.io/projected/4e18e163-6cf0-48ef-9a6f-90cbece870b0-kube-api-access-llx87\") on node \"crc\" DevicePath \"\"" Jan 31 14:47:50 crc kubenswrapper[4751]: I0131 14:47:50.440563 4751 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/4e18e163-6cf0-48ef-9a6f-90cbece870b0-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 31 14:47:50 crc kubenswrapper[4751]: I0131 14:47:50.440576 4751 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/4e18e163-6cf0-48ef-9a6f-90cbece870b0-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 31 14:47:50 crc kubenswrapper[4751]: I0131 14:47:50.440587 4751 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: 
\"kubernetes.io/empty-dir/4e18e163-6cf0-48ef-9a6f-90cbece870b0-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 31 14:47:50 crc kubenswrapper[4751]: I0131 14:47:50.440599 4751 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/4e18e163-6cf0-48ef-9a6f-90cbece870b0-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 31 14:47:50 crc kubenswrapper[4751]: I0131 14:47:50.440611 4751 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/4e18e163-6cf0-48ef-9a6f-90cbece870b0-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 31 14:47:50 crc kubenswrapper[4751]: I0131 14:47:50.440621 4751 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/4e18e163-6cf0-48ef-9a6f-90cbece870b0-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 31 14:47:50 crc kubenswrapper[4751]: E0131 14:47:50.531187 4751 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod4e18e163_6cf0_48ef_9a6f_90cbece870b0.slice\": RecentStats: unable to find data in memory cache]" Jan 31 14:47:51 crc kubenswrapper[4751]: I0131 14:47:51.158661 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" event={"ID":"4e18e163-6cf0-48ef-9a6f-90cbece870b0","Type":"ContainerDied","Data":"f189ebd73b2de2ffc6329477d3690421c7e4c89608c81de50df6ebb8b9b1c5e0"} Jan 31 14:47:51 crc kubenswrapper[4751]: I0131 14:47:51.158748 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-mpbgx" Jan 31 14:47:51 crc kubenswrapper[4751]: I0131 14:47:51.158758 4751 scope.go:117] "RemoveContainer" containerID="4a4776950d27c1d1245ca6dd71fb7012b30d42bb2d21525539ad27b3f377c032" Jan 31 14:47:51 crc kubenswrapper[4751]: I0131 14:47:51.204354 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mpbgx"] Jan 31 14:47:51 crc kubenswrapper[4751]: I0131 14:47:51.211768 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-mpbgx"] Jan 31 14:47:52 crc kubenswrapper[4751]: I0131 14:47:52.417728 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4e18e163-6cf0-48ef-9a6f-90cbece870b0" path="/var/lib/kubelet/pods/4e18e163-6cf0-48ef-9a6f-90cbece870b0/volumes" Jan 31 14:48:08 crc kubenswrapper[4751]: I0131 14:48:08.897339 4751 patch_prober.go:28] interesting pod/machine-config-daemon-2wpj7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 14:48:08 crc kubenswrapper[4751]: I0131 14:48:08.898138 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 14:48:38 crc kubenswrapper[4751]: I0131 14:48:38.896880 4751 patch_prober.go:28] interesting pod/machine-config-daemon-2wpj7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 14:48:38 
crc kubenswrapper[4751]: I0131 14:48:38.897547 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 14:48:38 crc kubenswrapper[4751]: I0131 14:48:38.897613 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" Jan 31 14:48:38 crc kubenswrapper[4751]: I0131 14:48:38.898984 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"45cb0d3a062f00471c149bf8e8ee7eaef0df67968aef3870677e63ed898aa00d"} pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 14:48:38 crc kubenswrapper[4751]: I0131 14:48:38.899179 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerName="machine-config-daemon" containerID="cri-o://45cb0d3a062f00471c149bf8e8ee7eaef0df67968aef3870677e63ed898aa00d" gracePeriod=600 Jan 31 14:48:39 crc kubenswrapper[4751]: I0131 14:48:39.511889 4751 generic.go:334] "Generic (PLEG): container finished" podID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerID="45cb0d3a062f00471c149bf8e8ee7eaef0df67968aef3870677e63ed898aa00d" exitCode=0 Jan 31 14:48:39 crc kubenswrapper[4751]: I0131 14:48:39.512006 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" 
event={"ID":"b4c170e8-22c9-43a9-8b34-9d626c2ccddc","Type":"ContainerDied","Data":"45cb0d3a062f00471c149bf8e8ee7eaef0df67968aef3870677e63ed898aa00d"} Jan 31 14:48:39 crc kubenswrapper[4751]: I0131 14:48:39.512234 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" event={"ID":"b4c170e8-22c9-43a9-8b34-9d626c2ccddc","Type":"ContainerStarted","Data":"ef29f0f695de11b302d97f5ade678c0ae9fdc43953c2430b685d7fd276ee3217"} Jan 31 14:48:39 crc kubenswrapper[4751]: I0131 14:48:39.512272 4751 scope.go:117] "RemoveContainer" containerID="3956d143be77f4a50143f9678eb51ab7871e250cae73d87c9e7fce2575e466c2" Jan 31 14:50:36 crc kubenswrapper[4751]: I0131 14:50:36.754271 4751 scope.go:117] "RemoveContainer" containerID="8e402889398f0b5d93bacd46f42378e3cdc7f2ee478995578d04804d8ec0f029" Jan 31 14:50:36 crc kubenswrapper[4751]: I0131 14:50:36.780421 4751 scope.go:117] "RemoveContainer" containerID="96a0531e47323a9257c24b651a7067cc71a6c2a1c9189022bfa8c72e23c446c1" Jan 31 14:51:08 crc kubenswrapper[4751]: I0131 14:51:08.897367 4751 patch_prober.go:28] interesting pod/machine-config-daemon-2wpj7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 14:51:08 crc kubenswrapper[4751]: I0131 14:51:08.898462 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 14:51:38 crc kubenswrapper[4751]: I0131 14:51:38.897321 4751 patch_prober.go:28] interesting pod/machine-config-daemon-2wpj7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe 
status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 14:51:38 crc kubenswrapper[4751]: I0131 14:51:38.898185 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 14:52:08 crc kubenswrapper[4751]: I0131 14:52:08.896842 4751 patch_prober.go:28] interesting pod/machine-config-daemon-2wpj7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 14:52:08 crc kubenswrapper[4751]: I0131 14:52:08.897433 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 14:52:08 crc kubenswrapper[4751]: I0131 14:52:08.897490 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" Jan 31 14:52:08 crc kubenswrapper[4751]: I0131 14:52:08.898303 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"ef29f0f695de11b302d97f5ade678c0ae9fdc43953c2430b685d7fd276ee3217"} pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 14:52:08 crc kubenswrapper[4751]: I0131 14:52:08.898394 4751 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerName="machine-config-daemon" containerID="cri-o://ef29f0f695de11b302d97f5ade678c0ae9fdc43953c2430b685d7fd276ee3217" gracePeriod=600 Jan 31 14:52:09 crc kubenswrapper[4751]: I0131 14:52:09.977361 4751 generic.go:334] "Generic (PLEG): container finished" podID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerID="ef29f0f695de11b302d97f5ade678c0ae9fdc43953c2430b685d7fd276ee3217" exitCode=0 Jan 31 14:52:09 crc kubenswrapper[4751]: I0131 14:52:09.977885 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" event={"ID":"b4c170e8-22c9-43a9-8b34-9d626c2ccddc","Type":"ContainerDied","Data":"ef29f0f695de11b302d97f5ade678c0ae9fdc43953c2430b685d7fd276ee3217"} Jan 31 14:52:09 crc kubenswrapper[4751]: I0131 14:52:09.978538 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" event={"ID":"b4c170e8-22c9-43a9-8b34-9d626c2ccddc","Type":"ContainerStarted","Data":"f4d4f92719c72ec0adb31e02a10d5c8bcb4b1a9b3bfb5b0e7ed8cfdbc4a53235"} Jan 31 14:52:09 crc kubenswrapper[4751]: I0131 14:52:09.978639 4751 scope.go:117] "RemoveContainer" containerID="45cb0d3a062f00471c149bf8e8ee7eaef0df67968aef3870677e63ed898aa00d" Jan 31 14:52:14 crc kubenswrapper[4751]: I0131 14:52:14.655141 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n8cdt"] Jan 31 14:52:14 crc kubenswrapper[4751]: I0131 14:52:14.656536 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="ovn-controller" containerID="cri-o://e8f6719bdf2fef7e21f7c5cdcb3de4b4f08e69bd62e22cb10fe157f638b3c148" gracePeriod=30 Jan 31 14:52:14 crc 
kubenswrapper[4751]: I0131 14:52:14.657618 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="sbdb" containerID="cri-o://e7f5e784895f315aa4d8fb6039b9e70c46f977a80fdd5d39ead81d75048abd6c" gracePeriod=30 Jan 31 14:52:14 crc kubenswrapper[4751]: I0131 14:52:14.657701 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="nbdb" containerID="cri-o://701178503ef461547e687442de0607af20cee8bca075347d1d1cb2a439562232" gracePeriod=30 Jan 31 14:52:14 crc kubenswrapper[4751]: I0131 14:52:14.657761 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="northd" containerID="cri-o://5623c591d9a6806335a9b671648fc172e749568a7ad29c3bd41f1a990ba56010" gracePeriod=30 Jan 31 14:52:14 crc kubenswrapper[4751]: I0131 14:52:14.657816 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://20273cc05bc490f8dd5258dccd02aa0e97d1d22c159ce2c177594920f4af83de" gracePeriod=30 Jan 31 14:52:14 crc kubenswrapper[4751]: I0131 14:52:14.657882 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="kube-rbac-proxy-node" containerID="cri-o://f3446cde960ab284f70a338a9a4bfee334fdbdef503d868692fb847f4df6acba" gracePeriod=30 Jan 31 14:52:14 crc kubenswrapper[4751]: I0131 14:52:14.658021 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" 
podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="ovn-acl-logging" containerID="cri-o://357afc89f219ac34037aaeb7086076b36f05907502c53006aeb588eaf440cc74" gracePeriod=30 Jan 31 14:52:14 crc kubenswrapper[4751]: I0131 14:52:14.708425 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="ovnkube-controller" containerID="cri-o://153c98b7ebe36043f7ae094ec4ae3226c12652e95174c4ff2d00efc441bdb785" gracePeriod=30 Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.017807 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8cdt_ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b/ovnkube-controller/3.log" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.021794 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8cdt_ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b/ovn-acl-logging/0.log" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.028473 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8cdt_ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b/ovnkube-controller/3.log" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.028637 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8cdt_ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b/ovn-controller/0.log" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.029728 4751 generic.go:334] "Generic (PLEG): container finished" podID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerID="153c98b7ebe36043f7ae094ec4ae3226c12652e95174c4ff2d00efc441bdb785" exitCode=0 Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.029788 4751 generic.go:334] "Generic (PLEG): container finished" podID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerID="e7f5e784895f315aa4d8fb6039b9e70c46f977a80fdd5d39ead81d75048abd6c" exitCode=0 Jan 31 
14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.029810 4751 generic.go:334] "Generic (PLEG): container finished" podID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerID="701178503ef461547e687442de0607af20cee8bca075347d1d1cb2a439562232" exitCode=0 Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.029812 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" event={"ID":"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b","Type":"ContainerDied","Data":"153c98b7ebe36043f7ae094ec4ae3226c12652e95174c4ff2d00efc441bdb785"} Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.029834 4751 generic.go:334] "Generic (PLEG): container finished" podID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerID="5623c591d9a6806335a9b671648fc172e749568a7ad29c3bd41f1a990ba56010" exitCode=0 Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.029852 4751 generic.go:334] "Generic (PLEG): container finished" podID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerID="20273cc05bc490f8dd5258dccd02aa0e97d1d22c159ce2c177594920f4af83de" exitCode=0 Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.029874 4751 generic.go:334] "Generic (PLEG): container finished" podID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerID="f3446cde960ab284f70a338a9a4bfee334fdbdef503d868692fb847f4df6acba" exitCode=0 Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.029894 4751 scope.go:117] "RemoveContainer" containerID="373c89defd0c3e17f3124be6af9afba6b241a48af85f558bb51d281d16ba27ac" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.029894 4751 generic.go:334] "Generic (PLEG): container finished" podID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerID="357afc89f219ac34037aaeb7086076b36f05907502c53006aeb588eaf440cc74" exitCode=143 Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.030020 4751 generic.go:334] "Generic (PLEG): container finished" podID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" 
containerID="e8f6719bdf2fef7e21f7c5cdcb3de4b4f08e69bd62e22cb10fe157f638b3c148" exitCode=143 Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.029875 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" event={"ID":"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b","Type":"ContainerDied","Data":"e7f5e784895f315aa4d8fb6039b9e70c46f977a80fdd5d39ead81d75048abd6c"} Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.030124 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" event={"ID":"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b","Type":"ContainerDied","Data":"701178503ef461547e687442de0607af20cee8bca075347d1d1cb2a439562232"} Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.030153 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" event={"ID":"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b","Type":"ContainerDied","Data":"5623c591d9a6806335a9b671648fc172e749568a7ad29c3bd41f1a990ba56010"} Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.030168 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" event={"ID":"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b","Type":"ContainerDied","Data":"20273cc05bc490f8dd5258dccd02aa0e97d1d22c159ce2c177594920f4af83de"} Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.030181 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" event={"ID":"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b","Type":"ContainerDied","Data":"f3446cde960ab284f70a338a9a4bfee334fdbdef503d868692fb847f4df6acba"} Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.030193 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" event={"ID":"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b","Type":"ContainerDied","Data":"357afc89f219ac34037aaeb7086076b36f05907502c53006aeb588eaf440cc74"} Jan 
31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.030206 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" event={"ID":"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b","Type":"ContainerDied","Data":"e8f6719bdf2fef7e21f7c5cdcb3de4b4f08e69bd62e22cb10fe157f638b3c148"} Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.030219 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" event={"ID":"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b","Type":"ContainerDied","Data":"4bbc5e8f3ce6775d094673644f5cb7355eba674b33cab2a960c6b275357e72b8"} Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.030236 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4bbc5e8f3ce6775d094673644f5cb7355eba674b33cab2a960c6b275357e72b8" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.031362 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8cdt_ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b/ovn-acl-logging/0.log" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.032212 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8cdt_ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b/ovn-controller/0.log" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.033330 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rtthp_e7dd989b-33df-4562-a60b-f273428fea3d/kube-multus/2.log" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.033379 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.033888 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rtthp_e7dd989b-33df-4562-a60b-f273428fea3d/kube-multus/1.log" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.033951 4751 generic.go:334] "Generic (PLEG): container finished" podID="e7dd989b-33df-4562-a60b-f273428fea3d" containerID="98a2f0e75ca2c214fba50a70792a41195e5b7e674dbe1ae5b98cd015b7526483" exitCode=2 Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.033990 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rtthp" event={"ID":"e7dd989b-33df-4562-a60b-f273428fea3d","Type":"ContainerDied","Data":"98a2f0e75ca2c214fba50a70792a41195e5b7e674dbe1ae5b98cd015b7526483"} Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.034846 4751 scope.go:117] "RemoveContainer" containerID="98a2f0e75ca2c214fba50a70792a41195e5b7e674dbe1ae5b98cd015b7526483" Jan 31 14:52:15 crc kubenswrapper[4751]: E0131 14:52:15.035386 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-rtthp_openshift-multus(e7dd989b-33df-4562-a60b-f273428fea3d)\"" pod="openshift-multus/multus-rtthp" podUID="e7dd989b-33df-4562-a60b-f273428fea3d" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.053342 4751 scope.go:117] "RemoveContainer" containerID="2bdcbdac0cc4b17e027947c041a0ee4a7d7f549aa6dbe5c07c370ca7c0c50475" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.094214 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9bvhc"] Jan 31 14:52:15 crc kubenswrapper[4751]: E0131 14:52:15.094878 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="kube-rbac-proxy-node" Jan 31 
14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.095019 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="kube-rbac-proxy-node" Jan 31 14:52:15 crc kubenswrapper[4751]: E0131 14:52:15.095180 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.095282 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 14:52:15 crc kubenswrapper[4751]: E0131 14:52:15.095377 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="kubecfg-setup" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.095478 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="kubecfg-setup" Jan 31 14:52:15 crc kubenswrapper[4751]: E0131 14:52:15.095590 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="northd" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.095689 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="northd" Jan 31 14:52:15 crc kubenswrapper[4751]: E0131 14:52:15.095804 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="nbdb" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.095905 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="nbdb" Jan 31 14:52:15 crc kubenswrapper[4751]: E0131 14:52:15.096004 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="ovn-acl-logging" Jan 31 14:52:15 crc 
kubenswrapper[4751]: I0131 14:52:15.096130 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="ovn-acl-logging" Jan 31 14:52:15 crc kubenswrapper[4751]: E0131 14:52:15.096267 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="ovnkube-controller" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.096370 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="ovnkube-controller" Jan 31 14:52:15 crc kubenswrapper[4751]: E0131 14:52:15.096475 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="ovnkube-controller" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.096574 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="ovnkube-controller" Jan 31 14:52:15 crc kubenswrapper[4751]: E0131 14:52:15.096671 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4e18e163-6cf0-48ef-9a6f-90cbece870b0" containerName="registry" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.096770 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4e18e163-6cf0-48ef-9a6f-90cbece870b0" containerName="registry" Jan 31 14:52:15 crc kubenswrapper[4751]: E0131 14:52:15.096874 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="ovnkube-controller" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.096968 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="ovnkube-controller" Jan 31 14:52:15 crc kubenswrapper[4751]: E0131 14:52:15.097109 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="ovnkube-controller" Jan 31 14:52:15 crc 
kubenswrapper[4751]: I0131 14:52:15.097227 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="ovnkube-controller" Jan 31 14:52:15 crc kubenswrapper[4751]: E0131 14:52:15.097334 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="ovn-controller" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.097433 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="ovn-controller" Jan 31 14:52:15 crc kubenswrapper[4751]: E0131 14:52:15.097538 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="sbdb" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.097629 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="sbdb" Jan 31 14:52:15 crc kubenswrapper[4751]: E0131 14:52:15.097728 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="ovnkube-controller" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.097852 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="ovnkube-controller" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.098142 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="ovnkube-controller" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.098275 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="sbdb" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.098382 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="ovnkube-controller" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 
14:52:15.098484 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="kube-rbac-proxy-node" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.098630 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="ovnkube-controller" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.098763 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="4e18e163-6cf0-48ef-9a6f-90cbece870b0" containerName="registry" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.098861 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="ovnkube-controller" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.098959 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="nbdb" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.099060 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="ovn-controller" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.099201 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="ovn-acl-logging" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.099301 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="kube-rbac-proxy-ovn-metrics" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.099437 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="northd" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.099864 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" containerName="ovnkube-controller" Jan 31 14:52:15 
crc kubenswrapper[4751]: I0131 14:52:15.102940 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.106755 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-ovn-node-metrics-cert\") pod \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.106823 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zhmb7\" (UniqueName: \"kubernetes.io/projected/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-kube-api-access-zhmb7\") pod \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.106857 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-run-systemd\") pod \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.106883 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-run-netns\") pod \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.106902 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-log-socket\") pod \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " Jan 31 14:52:15 crc kubenswrapper[4751]: 
I0131 14:52:15.106920 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-ovnkube-config\") pod \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.106958 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-run-ovn\") pod \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.106982 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.107014 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-slash\") pod \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.107037 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-kubelet\") pod \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.107091 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-ovnkube-script-lib\") pod \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.107142 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-cni-bin\") pod \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.107163 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-systemd-units\") pod \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.107192 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-cni-netd\") pod \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.107216 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-etc-openvswitch\") pod \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.107244 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-run-ovn-kubernetes\") pod \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " Jan 31 14:52:15 crc 
kubenswrapper[4751]: I0131 14:52:15.107259 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-node-log\") pod \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.107302 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-env-overrides\") pod \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.107318 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-run-openvswitch\") pod \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.107334 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-var-lib-openvswitch\") pod \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\" (UID: \"ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b\") " Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.107514 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" (UID: "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.107605 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-log-socket" (OuterVolumeSpecName: "log-socket") pod "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" (UID: "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.107674 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" (UID: "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.107750 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" (UID: "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b"). InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.107801 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" (UID: "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b"). InnerVolumeSpecName "run-ovn". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.107854 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-slash" (OuterVolumeSpecName: "host-slash") pod "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" (UID: "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.107917 4751 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-log-socket\") on node \"crc\" DevicePath \"\"" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.107942 4751 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.107959 4751 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.107998 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" (UID: "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.108096 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" (UID: "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.108143 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" (UID: "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.108166 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" (UID: "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.108193 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" (UID: "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.108199 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-node-log" (OuterVolumeSpecName: "node-log") pod "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" (UID: "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.108240 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" (UID: "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.108293 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" (UID: "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.108326 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" (UID: "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.108584 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" (UID: "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.108623 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" (UID: "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.114166 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" (UID: "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.116991 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-kube-api-access-zhmb7" (OuterVolumeSpecName: "kube-api-access-zhmb7") pod "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" (UID: "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b"). InnerVolumeSpecName "kube-api-access-zhmb7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.122210 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" (UID: "ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.208921 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-run-systemd\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.209429 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.209476 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-host-slash\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.209512 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/5c1c10ff-f217-4a26-8bd1-7d4642d08976-ovnkube-config\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.209630 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lqqmg\" (UniqueName: \"kubernetes.io/projected/5c1c10ff-f217-4a26-8bd1-7d4642d08976-kube-api-access-lqqmg\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.209686 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-host-cni-netd\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.209733 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-var-lib-openvswitch\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.209790 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-node-log\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.209839 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5c1c10ff-f217-4a26-8bd1-7d4642d08976-env-overrides\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.209959 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-host-kubelet\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.210026 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-host-cni-bin\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.210127 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-host-run-netns\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.210175 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-run-ovn\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.210261 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5c1c10ff-f217-4a26-8bd1-7d4642d08976-ovnkube-script-lib\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.210326 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-log-socket\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.210385 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-host-run-ovn-kubernetes\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.210500 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-run-openvswitch\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.210632 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5c1c10ff-f217-4a26-8bd1-7d4642d08976-ovn-node-metrics-cert\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.210696 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-systemd-units\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.210742 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-etc-openvswitch\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.210889 4751 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.210918 4751 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.210941 4751 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.210960 4751 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.210977 4751 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-slash\") on node 
\"crc\" DevicePath \"\"" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.210995 4751 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.211013 4751 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.211032 4751 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.211050 4751 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.211093 4751 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.211130 4751 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-node-log\") on node \"crc\" DevicePath \"\"" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.211149 4751 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.211169 4751 
reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.211186 4751 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.211203 4751 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.211220 4751 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.211238 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zhmb7\" (UniqueName: \"kubernetes.io/projected/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b-kube-api-access-zhmb7\") on node \"crc\" DevicePath \"\"" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.312363 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-host-kubelet\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.312450 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-host-cni-bin\") pod \"ovnkube-node-9bvhc\" (UID: 
\"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.312501 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-host-run-netns\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.312532 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-run-ovn\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.312541 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-host-kubelet\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.312570 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5c1c10ff-f217-4a26-8bd1-7d4642d08976-ovnkube-script-lib\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.312615 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-host-cni-bin\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 
crc kubenswrapper[4751]: I0131 14:52:15.312809 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-host-run-netns\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.312853 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-log-socket\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.312884 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-run-ovn\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.312896 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-host-run-ovn-kubernetes\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.312950 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-run-openvswitch\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.312971 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-log-socket\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.313011 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-host-run-ovn-kubernetes\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.313021 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5c1c10ff-f217-4a26-8bd1-7d4642d08976-ovn-node-metrics-cert\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.313112 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-systemd-units\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.313018 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-run-openvswitch\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.313146 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-etc-openvswitch\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.313181 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-systemd-units\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.313196 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-run-systemd\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.313225 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.313245 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-host-slash\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.313264 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: 
\"kubernetes.io/configmap/5c1c10ff-f217-4a26-8bd1-7d4642d08976-ovnkube-config\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.313296 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lqqmg\" (UniqueName: \"kubernetes.io/projected/5c1c10ff-f217-4a26-8bd1-7d4642d08976-kube-api-access-lqqmg\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.313318 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-host-cni-netd\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.313343 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-var-lib-openvswitch\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.313343 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.313376 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: 
\"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-node-log\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.313358 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-node-log\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.313401 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-host-slash\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.313430 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5c1c10ff-f217-4a26-8bd1-7d4642d08976-env-overrides\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.313621 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-run-systemd\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.313735 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-host-cni-netd\") pod \"ovnkube-node-9bvhc\" (UID: 
\"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.313290 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-etc-openvswitch\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.313799 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/5c1c10ff-f217-4a26-8bd1-7d4642d08976-ovnkube-script-lib\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.313818 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/5c1c10ff-f217-4a26-8bd1-7d4642d08976-var-lib-openvswitch\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.314416 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/5c1c10ff-f217-4a26-8bd1-7d4642d08976-ovnkube-config\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.314826 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/5c1c10ff-f217-4a26-8bd1-7d4642d08976-env-overrides\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 
14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.318553 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/5c1c10ff-f217-4a26-8bd1-7d4642d08976-ovn-node-metrics-cert\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.337586 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lqqmg\" (UniqueName: \"kubernetes.io/projected/5c1c10ff-f217-4a26-8bd1-7d4642d08976-kube-api-access-lqqmg\") pod \"ovnkube-node-9bvhc\" (UID: \"5c1c10ff-f217-4a26-8bd1-7d4642d08976\") " pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: I0131 14:52:15.421240 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:15 crc kubenswrapper[4751]: W0131 14:52:15.454960 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5c1c10ff_f217_4a26_8bd1_7d4642d08976.slice/crio-d0edb5a30a101580494e50b6ac133f141a917c7ad10f4f7b5dad725c96af65ab WatchSource:0}: Error finding container d0edb5a30a101580494e50b6ac133f141a917c7ad10f4f7b5dad725c96af65ab: Status 404 returned error can't find the container with id d0edb5a30a101580494e50b6ac133f141a917c7ad10f4f7b5dad725c96af65ab Jan 31 14:52:16 crc kubenswrapper[4751]: I0131 14:52:16.042473 4751 generic.go:334] "Generic (PLEG): container finished" podID="5c1c10ff-f217-4a26-8bd1-7d4642d08976" containerID="d1c543d3283531f95fc28795636b70ac7c361ee5585f2bced078691fc7907cae" exitCode=0 Jan 31 14:52:16 crc kubenswrapper[4751]: I0131 14:52:16.042558 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" 
event={"ID":"5c1c10ff-f217-4a26-8bd1-7d4642d08976","Type":"ContainerDied","Data":"d1c543d3283531f95fc28795636b70ac7c361ee5585f2bced078691fc7907cae"} Jan 31 14:52:16 crc kubenswrapper[4751]: I0131 14:52:16.042972 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" event={"ID":"5c1c10ff-f217-4a26-8bd1-7d4642d08976","Type":"ContainerStarted","Data":"d0edb5a30a101580494e50b6ac133f141a917c7ad10f4f7b5dad725c96af65ab"} Jan 31 14:52:16 crc kubenswrapper[4751]: I0131 14:52:16.044770 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rtthp_e7dd989b-33df-4562-a60b-f273428fea3d/kube-multus/2.log" Jan 31 14:52:16 crc kubenswrapper[4751]: I0131 14:52:16.049884 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8cdt_ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b/ovn-acl-logging/0.log" Jan 31 14:52:16 crc kubenswrapper[4751]: I0131 14:52:16.050536 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-n8cdt_ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b/ovn-controller/0.log" Jan 31 14:52:16 crc kubenswrapper[4751]: I0131 14:52:16.051109 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-n8cdt" Jan 31 14:52:16 crc kubenswrapper[4751]: I0131 14:52:16.130131 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n8cdt"] Jan 31 14:52:16 crc kubenswrapper[4751]: I0131 14:52:16.134464 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-n8cdt"] Jan 31 14:52:16 crc kubenswrapper[4751]: I0131 14:52:16.419238 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b" path="/var/lib/kubelet/pods/ceef6ba7-8d2d-4105-beee-6a8bdfd12c9b/volumes" Jan 31 14:52:17 crc kubenswrapper[4751]: I0131 14:52:17.060828 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" event={"ID":"5c1c10ff-f217-4a26-8bd1-7d4642d08976","Type":"ContainerStarted","Data":"6be2c0aab27a1ea907b6e1922671cfb2dd1e74f772feba152bb0a87106238934"} Jan 31 14:52:17 crc kubenswrapper[4751]: I0131 14:52:17.061277 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" event={"ID":"5c1c10ff-f217-4a26-8bd1-7d4642d08976","Type":"ContainerStarted","Data":"af8e7735fca292c3fa8c7d13ef2c2bed0aff024bf3578472b0f8e0537d6e1eac"} Jan 31 14:52:17 crc kubenswrapper[4751]: I0131 14:52:17.061296 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" event={"ID":"5c1c10ff-f217-4a26-8bd1-7d4642d08976","Type":"ContainerStarted","Data":"5cd9a556e224cc7ab141d373b791f2ede7c5a7ecdb33c8aaa303590991f20727"} Jan 31 14:52:17 crc kubenswrapper[4751]: I0131 14:52:17.061308 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" event={"ID":"5c1c10ff-f217-4a26-8bd1-7d4642d08976","Type":"ContainerStarted","Data":"fea6867b3acc68182fd2a6da399b36c6d7d7d0503753c6ce288b1a45e8a6ecbe"} Jan 31 14:52:17 crc kubenswrapper[4751]: I0131 
14:52:17.061320 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" event={"ID":"5c1c10ff-f217-4a26-8bd1-7d4642d08976","Type":"ContainerStarted","Data":"79095c5483fb640b1ae163e960478ba096385b102267c80108d34889fd4fd3df"} Jan 31 14:52:17 crc kubenswrapper[4751]: I0131 14:52:17.061332 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" event={"ID":"5c1c10ff-f217-4a26-8bd1-7d4642d08976","Type":"ContainerStarted","Data":"1dff3d05a267b9c9481ca9057c01eae797fb8297693b00d48d276f5b7ae061e3"} Jan 31 14:52:20 crc kubenswrapper[4751]: I0131 14:52:20.090889 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" event={"ID":"5c1c10ff-f217-4a26-8bd1-7d4642d08976","Type":"ContainerStarted","Data":"4e30e3a765c67d68b9bd75d540c6f046d7b7840c6e140d3677658fd08f94febb"} Jan 31 14:52:22 crc kubenswrapper[4751]: I0131 14:52:22.109155 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" event={"ID":"5c1c10ff-f217-4a26-8bd1-7d4642d08976","Type":"ContainerStarted","Data":"b96fc3c9debec3be139ce32a8916361e28ffb3782ac29f4639739f6635a0b9c2"} Jan 31 14:52:22 crc kubenswrapper[4751]: I0131 14:52:22.110010 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:22 crc kubenswrapper[4751]: I0131 14:52:22.110026 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:22 crc kubenswrapper[4751]: I0131 14:52:22.141198 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:22 crc kubenswrapper[4751]: I0131 14:52:22.146113 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" 
podStartSLOduration=7.14609324 podStartE2EDuration="7.14609324s" podCreationTimestamp="2026-01-31 14:52:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:52:22.144371745 +0000 UTC m=+646.519084630" watchObservedRunningTime="2026-01-31 14:52:22.14609324 +0000 UTC m=+646.520806135" Jan 31 14:52:23 crc kubenswrapper[4751]: I0131 14:52:23.115908 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:23 crc kubenswrapper[4751]: I0131 14:52:23.154211 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:26 crc kubenswrapper[4751]: I0131 14:52:26.411393 4751 scope.go:117] "RemoveContainer" containerID="98a2f0e75ca2c214fba50a70792a41195e5b7e674dbe1ae5b98cd015b7526483" Jan 31 14:52:26 crc kubenswrapper[4751]: E0131 14:52:26.414341 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-rtthp_openshift-multus(e7dd989b-33df-4562-a60b-f273428fea3d)\"" pod="openshift-multus/multus-rtthp" podUID="e7dd989b-33df-4562-a60b-f273428fea3d" Jan 31 14:52:36 crc kubenswrapper[4751]: I0131 14:52:36.831978 4751 scope.go:117] "RemoveContainer" containerID="e8f6719bdf2fef7e21f7c5cdcb3de4b4f08e69bd62e22cb10fe157f638b3c148" Jan 31 14:52:36 crc kubenswrapper[4751]: I0131 14:52:36.861192 4751 scope.go:117] "RemoveContainer" containerID="153c98b7ebe36043f7ae094ec4ae3226c12652e95174c4ff2d00efc441bdb785" Jan 31 14:52:36 crc kubenswrapper[4751]: I0131 14:52:36.881957 4751 scope.go:117] "RemoveContainer" containerID="5623c591d9a6806335a9b671648fc172e749568a7ad29c3bd41f1a990ba56010" Jan 31 14:52:36 crc kubenswrapper[4751]: I0131 14:52:36.900656 4751 scope.go:117] "RemoveContainer" 
containerID="20273cc05bc490f8dd5258dccd02aa0e97d1d22c159ce2c177594920f4af83de" Jan 31 14:52:36 crc kubenswrapper[4751]: I0131 14:52:36.922438 4751 scope.go:117] "RemoveContainer" containerID="122856fd111512aaab2292d448505fb13a6bea5859652ab4e8bb512ee8d6aba9" Jan 31 14:52:36 crc kubenswrapper[4751]: I0131 14:52:36.948511 4751 scope.go:117] "RemoveContainer" containerID="f3446cde960ab284f70a338a9a4bfee334fdbdef503d868692fb847f4df6acba" Jan 31 14:52:36 crc kubenswrapper[4751]: I0131 14:52:36.979000 4751 scope.go:117] "RemoveContainer" containerID="357afc89f219ac34037aaeb7086076b36f05907502c53006aeb588eaf440cc74" Jan 31 14:52:36 crc kubenswrapper[4751]: I0131 14:52:36.993027 4751 scope.go:117] "RemoveContainer" containerID="e7f5e784895f315aa4d8fb6039b9e70c46f977a80fdd5d39ead81d75048abd6c" Jan 31 14:52:37 crc kubenswrapper[4751]: I0131 14:52:37.006152 4751 scope.go:117] "RemoveContainer" containerID="701178503ef461547e687442de0607af20cee8bca075347d1d1cb2a439562232" Jan 31 14:52:38 crc kubenswrapper[4751]: I0131 14:52:38.406813 4751 scope.go:117] "RemoveContainer" containerID="98a2f0e75ca2c214fba50a70792a41195e5b7e674dbe1ae5b98cd015b7526483" Jan 31 14:52:39 crc kubenswrapper[4751]: I0131 14:52:39.243438 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-rtthp_e7dd989b-33df-4562-a60b-f273428fea3d/kube-multus/2.log" Jan 31 14:52:39 crc kubenswrapper[4751]: I0131 14:52:39.243798 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-rtthp" event={"ID":"e7dd989b-33df-4562-a60b-f273428fea3d","Type":"ContainerStarted","Data":"8207fb66e68bce2bc8a8e7120aa05f6d77ac2b5b91eb20ca05dc568acd4bd4aa"} Jan 31 14:52:45 crc kubenswrapper[4751]: I0131 14:52:45.450807 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9bvhc" Jan 31 14:52:48 crc kubenswrapper[4751]: I0131 14:52:48.796682 4751 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz"] Jan 31 14:52:48 crc kubenswrapper[4751]: I0131 14:52:48.799333 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz" Jan 31 14:52:48 crc kubenswrapper[4751]: I0131 14:52:48.802491 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 31 14:52:48 crc kubenswrapper[4751]: I0131 14:52:48.810414 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz"] Jan 31 14:52:48 crc kubenswrapper[4751]: I0131 14:52:48.948657 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f3380dc7-49d9-4d61-a0bb-003c1c5e2742-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz\" (UID: \"f3380dc7-49d9-4d61-a0bb-003c1c5e2742\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz" Jan 31 14:52:48 crc kubenswrapper[4751]: I0131 14:52:48.948769 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f3380dc7-49d9-4d61-a0bb-003c1c5e2742-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz\" (UID: \"f3380dc7-49d9-4d61-a0bb-003c1c5e2742\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz" Jan 31 14:52:48 crc kubenswrapper[4751]: I0131 14:52:48.948836 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zvc7\" (UniqueName: \"kubernetes.io/projected/f3380dc7-49d9-4d61-a0bb-003c1c5e2742-kube-api-access-4zvc7\") pod 
\"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz\" (UID: \"f3380dc7-49d9-4d61-a0bb-003c1c5e2742\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz" Jan 31 14:52:49 crc kubenswrapper[4751]: I0131 14:52:49.050189 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f3380dc7-49d9-4d61-a0bb-003c1c5e2742-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz\" (UID: \"f3380dc7-49d9-4d61-a0bb-003c1c5e2742\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz" Jan 31 14:52:49 crc kubenswrapper[4751]: I0131 14:52:49.050305 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f3380dc7-49d9-4d61-a0bb-003c1c5e2742-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz\" (UID: \"f3380dc7-49d9-4d61-a0bb-003c1c5e2742\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz" Jan 31 14:52:49 crc kubenswrapper[4751]: I0131 14:52:49.050378 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zvc7\" (UniqueName: \"kubernetes.io/projected/f3380dc7-49d9-4d61-a0bb-003c1c5e2742-kube-api-access-4zvc7\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz\" (UID: \"f3380dc7-49d9-4d61-a0bb-003c1c5e2742\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz" Jan 31 14:52:49 crc kubenswrapper[4751]: I0131 14:52:49.050956 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f3380dc7-49d9-4d61-a0bb-003c1c5e2742-util\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz\" (UID: \"f3380dc7-49d9-4d61-a0bb-003c1c5e2742\") " 
pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz" Jan 31 14:52:49 crc kubenswrapper[4751]: I0131 14:52:49.050978 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f3380dc7-49d9-4d61-a0bb-003c1c5e2742-bundle\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz\" (UID: \"f3380dc7-49d9-4d61-a0bb-003c1c5e2742\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz" Jan 31 14:52:49 crc kubenswrapper[4751]: I0131 14:52:49.091358 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zvc7\" (UniqueName: \"kubernetes.io/projected/f3380dc7-49d9-4d61-a0bb-003c1c5e2742-kube-api-access-4zvc7\") pod \"270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz\" (UID: \"f3380dc7-49d9-4d61-a0bb-003c1c5e2742\") " pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz" Jan 31 14:52:49 crc kubenswrapper[4751]: I0131 14:52:49.117320 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz" Jan 31 14:52:49 crc kubenswrapper[4751]: I0131 14:52:49.634127 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz"] Jan 31 14:52:50 crc kubenswrapper[4751]: I0131 14:52:50.308421 4751 generic.go:334] "Generic (PLEG): container finished" podID="f3380dc7-49d9-4d61-a0bb-003c1c5e2742" containerID="3efd438a3705eea1a3efedddcd876eeacfb008f62b18f56c151922dc22bf9158" exitCode=0 Jan 31 14:52:50 crc kubenswrapper[4751]: I0131 14:52:50.308478 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz" event={"ID":"f3380dc7-49d9-4d61-a0bb-003c1c5e2742","Type":"ContainerDied","Data":"3efd438a3705eea1a3efedddcd876eeacfb008f62b18f56c151922dc22bf9158"} Jan 31 14:52:50 crc kubenswrapper[4751]: I0131 14:52:50.308749 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz" event={"ID":"f3380dc7-49d9-4d61-a0bb-003c1c5e2742","Type":"ContainerStarted","Data":"40f87475e03d21ed5061858cc5fb9fd5c9dea64b23c1bf0fe86b08af74845ea6"} Jan 31 14:52:50 crc kubenswrapper[4751]: I0131 14:52:50.310592 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 14:52:51 crc kubenswrapper[4751]: I0131 14:52:51.316397 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz" event={"ID":"f3380dc7-49d9-4d61-a0bb-003c1c5e2742","Type":"ContainerStarted","Data":"601bf14e8637916fa045ca6931615dde91e0b3bf3c4252b9f52fbc9df9e4bd03"} Jan 31 14:52:52 crc kubenswrapper[4751]: I0131 14:52:52.323687 4751 generic.go:334] "Generic (PLEG): container finished" podID="f3380dc7-49d9-4d61-a0bb-003c1c5e2742" 
containerID="601bf14e8637916fa045ca6931615dde91e0b3bf3c4252b9f52fbc9df9e4bd03" exitCode=0 Jan 31 14:52:52 crc kubenswrapper[4751]: I0131 14:52:52.323814 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz" event={"ID":"f3380dc7-49d9-4d61-a0bb-003c1c5e2742","Type":"ContainerDied","Data":"601bf14e8637916fa045ca6931615dde91e0b3bf3c4252b9f52fbc9df9e4bd03"} Jan 31 14:52:53 crc kubenswrapper[4751]: I0131 14:52:53.330203 4751 generic.go:334] "Generic (PLEG): container finished" podID="f3380dc7-49d9-4d61-a0bb-003c1c5e2742" containerID="9ec8aed50894d62e18f3fb826346eabfdce4a94c9f2fae5ad6a9c65b1db62d96" exitCode=0 Jan 31 14:52:53 crc kubenswrapper[4751]: I0131 14:52:53.330250 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz" event={"ID":"f3380dc7-49d9-4d61-a0bb-003c1c5e2742","Type":"ContainerDied","Data":"9ec8aed50894d62e18f3fb826346eabfdce4a94c9f2fae5ad6a9c65b1db62d96"} Jan 31 14:52:54 crc kubenswrapper[4751]: I0131 14:52:54.610776 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz" Jan 31 14:52:54 crc kubenswrapper[4751]: I0131 14:52:54.720677 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f3380dc7-49d9-4d61-a0bb-003c1c5e2742-bundle\") pod \"f3380dc7-49d9-4d61-a0bb-003c1c5e2742\" (UID: \"f3380dc7-49d9-4d61-a0bb-003c1c5e2742\") " Jan 31 14:52:54 crc kubenswrapper[4751]: I0131 14:52:54.720893 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f3380dc7-49d9-4d61-a0bb-003c1c5e2742-util\") pod \"f3380dc7-49d9-4d61-a0bb-003c1c5e2742\" (UID: \"f3380dc7-49d9-4d61-a0bb-003c1c5e2742\") " Jan 31 14:52:54 crc kubenswrapper[4751]: I0131 14:52:54.720982 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4zvc7\" (UniqueName: \"kubernetes.io/projected/f3380dc7-49d9-4d61-a0bb-003c1c5e2742-kube-api-access-4zvc7\") pod \"f3380dc7-49d9-4d61-a0bb-003c1c5e2742\" (UID: \"f3380dc7-49d9-4d61-a0bb-003c1c5e2742\") " Jan 31 14:52:54 crc kubenswrapper[4751]: I0131 14:52:54.722627 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3380dc7-49d9-4d61-a0bb-003c1c5e2742-bundle" (OuterVolumeSpecName: "bundle") pod "f3380dc7-49d9-4d61-a0bb-003c1c5e2742" (UID: "f3380dc7-49d9-4d61-a0bb-003c1c5e2742"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:52:54 crc kubenswrapper[4751]: I0131 14:52:54.729322 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3380dc7-49d9-4d61-a0bb-003c1c5e2742-kube-api-access-4zvc7" (OuterVolumeSpecName: "kube-api-access-4zvc7") pod "f3380dc7-49d9-4d61-a0bb-003c1c5e2742" (UID: "f3380dc7-49d9-4d61-a0bb-003c1c5e2742"). InnerVolumeSpecName "kube-api-access-4zvc7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:52:54 crc kubenswrapper[4751]: I0131 14:52:54.750121 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3380dc7-49d9-4d61-a0bb-003c1c5e2742-util" (OuterVolumeSpecName: "util") pod "f3380dc7-49d9-4d61-a0bb-003c1c5e2742" (UID: "f3380dc7-49d9-4d61-a0bb-003c1c5e2742"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:52:54 crc kubenswrapper[4751]: I0131 14:52:54.823145 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4zvc7\" (UniqueName: \"kubernetes.io/projected/f3380dc7-49d9-4d61-a0bb-003c1c5e2742-kube-api-access-4zvc7\") on node \"crc\" DevicePath \"\"" Jan 31 14:52:54 crc kubenswrapper[4751]: I0131 14:52:54.823195 4751 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/f3380dc7-49d9-4d61-a0bb-003c1c5e2742-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:52:54 crc kubenswrapper[4751]: I0131 14:52:54.823218 4751 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/f3380dc7-49d9-4d61-a0bb-003c1c5e2742-util\") on node \"crc\" DevicePath \"\"" Jan 31 14:52:55 crc kubenswrapper[4751]: I0131 14:52:55.351131 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz" event={"ID":"f3380dc7-49d9-4d61-a0bb-003c1c5e2742","Type":"ContainerDied","Data":"40f87475e03d21ed5061858cc5fb9fd5c9dea64b23c1bf0fe86b08af74845ea6"} Jan 31 14:52:55 crc kubenswrapper[4751]: I0131 14:52:55.351183 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="40f87475e03d21ed5061858cc5fb9fd5c9dea64b23c1bf0fe86b08af74845ea6" Jan 31 14:52:55 crc kubenswrapper[4751]: I0131 14:52:55.351235 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz" Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.307771 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-controller-manager-6b999687d7-vf7mn"] Jan 31 14:53:04 crc kubenswrapper[4751]: E0131 14:53:04.308472 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3380dc7-49d9-4d61-a0bb-003c1c5e2742" containerName="util" Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.308484 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3380dc7-49d9-4d61-a0bb-003c1c5e2742" containerName="util" Jan 31 14:53:04 crc kubenswrapper[4751]: E0131 14:53:04.308493 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3380dc7-49d9-4d61-a0bb-003c1c5e2742" containerName="extract" Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.308499 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3380dc7-49d9-4d61-a0bb-003c1c5e2742" containerName="extract" Jan 31 14:53:04 crc kubenswrapper[4751]: E0131 14:53:04.308517 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3380dc7-49d9-4d61-a0bb-003c1c5e2742" containerName="pull" Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.308524 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3380dc7-49d9-4d61-a0bb-003c1c5e2742" containerName="pull" Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.308613 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3380dc7-49d9-4d61-a0bb-003c1c5e2742" containerName="extract" Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.308989 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6b999687d7-vf7mn" Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.310212 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-cert" Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.311130 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-controller-manager-service-cert" Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.311252 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"kube-root-ca.crt" Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.311300 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"openshift-service-ca.crt" Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.311348 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"manager-account-dockercfg-zdzdv" Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.323636 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6b999687d7-vf7mn"] Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.446937 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd60e998-83e4-442a-98ac-c4e33d4b4765-apiservice-cert\") pod \"metallb-operator-controller-manager-6b999687d7-vf7mn\" (UID: \"bd60e998-83e4-442a-98ac-c4e33d4b4765\") " pod="metallb-system/metallb-operator-controller-manager-6b999687d7-vf7mn" Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.447318 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd60e998-83e4-442a-98ac-c4e33d4b4765-webhook-cert\") pod \"metallb-operator-controller-manager-6b999687d7-vf7mn\" (UID: 
\"bd60e998-83e4-442a-98ac-c4e33d4b4765\") " pod="metallb-system/metallb-operator-controller-manager-6b999687d7-vf7mn" Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.447352 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpgrt\" (UniqueName: \"kubernetes.io/projected/bd60e998-83e4-442a-98ac-c4e33d4b4765-kube-api-access-tpgrt\") pod \"metallb-operator-controller-manager-6b999687d7-vf7mn\" (UID: \"bd60e998-83e4-442a-98ac-c4e33d4b4765\") " pod="metallb-system/metallb-operator-controller-manager-6b999687d7-vf7mn" Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.544823 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/metallb-operator-webhook-server-5c46dd7d46-8xt78"] Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.545777 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5c46dd7d46-8xt78" Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.547240 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-dockercfg-98hk2" Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.547263 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert" Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.547629 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-operator-webhook-server-service-cert" Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.547942 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd60e998-83e4-442a-98ac-c4e33d4b4765-webhook-cert\") pod \"metallb-operator-controller-manager-6b999687d7-vf7mn\" (UID: \"bd60e998-83e4-442a-98ac-c4e33d4b4765\") " pod="metallb-system/metallb-operator-controller-manager-6b999687d7-vf7mn" Jan 31 14:53:04 crc 
kubenswrapper[4751]: I0131 14:53:04.547980 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpgrt\" (UniqueName: \"kubernetes.io/projected/bd60e998-83e4-442a-98ac-c4e33d4b4765-kube-api-access-tpgrt\") pod \"metallb-operator-controller-manager-6b999687d7-vf7mn\" (UID: \"bd60e998-83e4-442a-98ac-c4e33d4b4765\") " pod="metallb-system/metallb-operator-controller-manager-6b999687d7-vf7mn" Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.548020 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd60e998-83e4-442a-98ac-c4e33d4b4765-apiservice-cert\") pod \"metallb-operator-controller-manager-6b999687d7-vf7mn\" (UID: \"bd60e998-83e4-442a-98ac-c4e33d4b4765\") " pod="metallb-system/metallb-operator-controller-manager-6b999687d7-vf7mn" Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.555035 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/bd60e998-83e4-442a-98ac-c4e33d4b4765-webhook-cert\") pod \"metallb-operator-controller-manager-6b999687d7-vf7mn\" (UID: \"bd60e998-83e4-442a-98ac-c4e33d4b4765\") " pod="metallb-system/metallb-operator-controller-manager-6b999687d7-vf7mn" Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.557914 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/bd60e998-83e4-442a-98ac-c4e33d4b4765-apiservice-cert\") pod \"metallb-operator-controller-manager-6b999687d7-vf7mn\" (UID: \"bd60e998-83e4-442a-98ac-c4e33d4b4765\") " pod="metallb-system/metallb-operator-controller-manager-6b999687d7-vf7mn" Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.566659 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpgrt\" (UniqueName: \"kubernetes.io/projected/bd60e998-83e4-442a-98ac-c4e33d4b4765-kube-api-access-tpgrt\") 
pod \"metallb-operator-controller-manager-6b999687d7-vf7mn\" (UID: \"bd60e998-83e4-442a-98ac-c4e33d4b4765\") " pod="metallb-system/metallb-operator-controller-manager-6b999687d7-vf7mn" Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.572089 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5c46dd7d46-8xt78"] Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.624566 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/metallb-operator-controller-manager-6b999687d7-vf7mn" Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.649614 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/01320eb9-ccb5-4593-866a-f49553fa7262-webhook-cert\") pod \"metallb-operator-webhook-server-5c46dd7d46-8xt78\" (UID: \"01320eb9-ccb5-4593-866a-f49553fa7262\") " pod="metallb-system/metallb-operator-webhook-server-5c46dd7d46-8xt78" Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.649673 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hnvst\" (UniqueName: \"kubernetes.io/projected/01320eb9-ccb5-4593-866a-f49553fa7262-kube-api-access-hnvst\") pod \"metallb-operator-webhook-server-5c46dd7d46-8xt78\" (UID: \"01320eb9-ccb5-4593-866a-f49553fa7262\") " pod="metallb-system/metallb-operator-webhook-server-5c46dd7d46-8xt78" Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.649729 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/01320eb9-ccb5-4593-866a-f49553fa7262-apiservice-cert\") pod \"metallb-operator-webhook-server-5c46dd7d46-8xt78\" (UID: \"01320eb9-ccb5-4593-866a-f49553fa7262\") " pod="metallb-system/metallb-operator-webhook-server-5c46dd7d46-8xt78" Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 
14:53:04.751600 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/01320eb9-ccb5-4593-866a-f49553fa7262-webhook-cert\") pod \"metallb-operator-webhook-server-5c46dd7d46-8xt78\" (UID: \"01320eb9-ccb5-4593-866a-f49553fa7262\") " pod="metallb-system/metallb-operator-webhook-server-5c46dd7d46-8xt78" Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.751654 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hnvst\" (UniqueName: \"kubernetes.io/projected/01320eb9-ccb5-4593-866a-f49553fa7262-kube-api-access-hnvst\") pod \"metallb-operator-webhook-server-5c46dd7d46-8xt78\" (UID: \"01320eb9-ccb5-4593-866a-f49553fa7262\") " pod="metallb-system/metallb-operator-webhook-server-5c46dd7d46-8xt78" Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.751699 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/01320eb9-ccb5-4593-866a-f49553fa7262-apiservice-cert\") pod \"metallb-operator-webhook-server-5c46dd7d46-8xt78\" (UID: \"01320eb9-ccb5-4593-866a-f49553fa7262\") " pod="metallb-system/metallb-operator-webhook-server-5c46dd7d46-8xt78" Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.755374 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/01320eb9-ccb5-4593-866a-f49553fa7262-webhook-cert\") pod \"metallb-operator-webhook-server-5c46dd7d46-8xt78\" (UID: \"01320eb9-ccb5-4593-866a-f49553fa7262\") " pod="metallb-system/metallb-operator-webhook-server-5c46dd7d46-8xt78" Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.755652 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/01320eb9-ccb5-4593-866a-f49553fa7262-apiservice-cert\") pod \"metallb-operator-webhook-server-5c46dd7d46-8xt78\" (UID: 
\"01320eb9-ccb5-4593-866a-f49553fa7262\") " pod="metallb-system/metallb-operator-webhook-server-5c46dd7d46-8xt78" Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.767990 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hnvst\" (UniqueName: \"kubernetes.io/projected/01320eb9-ccb5-4593-866a-f49553fa7262-kube-api-access-hnvst\") pod \"metallb-operator-webhook-server-5c46dd7d46-8xt78\" (UID: \"01320eb9-ccb5-4593-866a-f49553fa7262\") " pod="metallb-system/metallb-operator-webhook-server-5c46dd7d46-8xt78" Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.831210 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-controller-manager-6b999687d7-vf7mn"] Jan 31 14:53:04 crc kubenswrapper[4751]: W0131 14:53:04.844090 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbd60e998_83e4_442a_98ac_c4e33d4b4765.slice/crio-87d9ee3b1421f1fc472f483c122f5e08c3e3c2b1658c093f196ba55ec588a8e2 WatchSource:0}: Error finding container 87d9ee3b1421f1fc472f483c122f5e08c3e3c2b1658c093f196ba55ec588a8e2: Status 404 returned error can't find the container with id 87d9ee3b1421f1fc472f483c122f5e08c3e3c2b1658c093f196ba55ec588a8e2 Jan 31 14:53:04 crc kubenswrapper[4751]: I0131 14:53:04.900040 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/metallb-operator-webhook-server-5c46dd7d46-8xt78" Jan 31 14:53:05 crc kubenswrapper[4751]: W0131 14:53:05.107797 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod01320eb9_ccb5_4593_866a_f49553fa7262.slice/crio-fa5fc459280f07966a5966cfa6c51e8689e2d93ddb81e4227e19d9c78d8a044e WatchSource:0}: Error finding container fa5fc459280f07966a5966cfa6c51e8689e2d93ddb81e4227e19d9c78d8a044e: Status 404 returned error can't find the container with id fa5fc459280f07966a5966cfa6c51e8689e2d93ddb81e4227e19d9c78d8a044e Jan 31 14:53:05 crc kubenswrapper[4751]: I0131 14:53:05.109403 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/metallb-operator-webhook-server-5c46dd7d46-8xt78"] Jan 31 14:53:05 crc kubenswrapper[4751]: I0131 14:53:05.405959 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6b999687d7-vf7mn" event={"ID":"bd60e998-83e4-442a-98ac-c4e33d4b4765","Type":"ContainerStarted","Data":"87d9ee3b1421f1fc472f483c122f5e08c3e3c2b1658c093f196ba55ec588a8e2"} Jan 31 14:53:05 crc kubenswrapper[4751]: I0131 14:53:05.408446 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5c46dd7d46-8xt78" event={"ID":"01320eb9-ccb5-4593-866a-f49553fa7262","Type":"ContainerStarted","Data":"fa5fc459280f07966a5966cfa6c51e8689e2d93ddb81e4227e19d9c78d8a044e"} Jan 31 14:53:10 crc kubenswrapper[4751]: I0131 14:53:10.450043 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-webhook-server-5c46dd7d46-8xt78" event={"ID":"01320eb9-ccb5-4593-866a-f49553fa7262","Type":"ContainerStarted","Data":"2f7a224d466d03f11dd0cffdf6425400f947fda4e2e007bf4265a921bfaa57d4"} Jan 31 14:53:10 crc kubenswrapper[4751]: I0131 14:53:10.450698 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="metallb-system/metallb-operator-webhook-server-5c46dd7d46-8xt78" Jan 31 14:53:10 crc kubenswrapper[4751]: I0131 14:53:10.453290 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/metallb-operator-controller-manager-6b999687d7-vf7mn" event={"ID":"bd60e998-83e4-442a-98ac-c4e33d4b4765","Type":"ContainerStarted","Data":"c931d1a3cbf0137ad695710fb71fc10ddceeefcd678f5a98931970128e0dd1f9"} Jan 31 14:53:10 crc kubenswrapper[4751]: I0131 14:53:10.453449 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/metallb-operator-controller-manager-6b999687d7-vf7mn" Jan 31 14:53:10 crc kubenswrapper[4751]: I0131 14:53:10.520404 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-webhook-server-5c46dd7d46-8xt78" podStartSLOduration=2.047920445 podStartE2EDuration="6.52037971s" podCreationTimestamp="2026-01-31 14:53:04 +0000 UTC" firstStartedPulling="2026-01-31 14:53:05.113147079 +0000 UTC m=+689.487859964" lastFinishedPulling="2026-01-31 14:53:09.585606344 +0000 UTC m=+693.960319229" observedRunningTime="2026-01-31 14:53:10.483340836 +0000 UTC m=+694.858053741" watchObservedRunningTime="2026-01-31 14:53:10.52037971 +0000 UTC m=+694.895092615" Jan 31 14:53:10 crc kubenswrapper[4751]: I0131 14:53:10.524807 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/metallb-operator-controller-manager-6b999687d7-vf7mn" podStartSLOduration=1.813497146 podStartE2EDuration="6.524788537s" podCreationTimestamp="2026-01-31 14:53:04 +0000 UTC" firstStartedPulling="2026-01-31 14:53:04.847416979 +0000 UTC m=+689.222129864" lastFinishedPulling="2026-01-31 14:53:09.55870837 +0000 UTC m=+693.933421255" observedRunningTime="2026-01-31 14:53:10.516959489 +0000 UTC m=+694.891672394" watchObservedRunningTime="2026-01-31 14:53:10.524788537 +0000 UTC m=+694.899501442" Jan 31 14:53:24 crc kubenswrapper[4751]: I0131 14:53:24.907583 4751 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-webhook-server-5c46dd7d46-8xt78" Jan 31 14:53:44 crc kubenswrapper[4751]: I0131 14:53:44.627559 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/metallb-operator-controller-manager-6b999687d7-vf7mn" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.251169 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-9z9n2"] Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.254134 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-9z9n2" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.255336 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-qf86j"] Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.256124 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qf86j" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.256573 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-daemon-dockercfg-rqxft" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.256845 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-certs-secret" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.256957 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"frr-startup" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.257382 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"frr-k8s-webhook-server-cert" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.275626 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-qf86j"] Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.305180 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zkvz6\" (UniqueName: \"kubernetes.io/projected/b1f214e9-14db-462f-900c-3652ec7908e5-kube-api-access-zkvz6\") pod \"frr-k8s-9z9n2\" (UID: \"b1f214e9-14db-462f-900c-3652ec7908e5\") " pod="metallb-system/frr-k8s-9z9n2" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.305231 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b1f214e9-14db-462f-900c-3652ec7908e5-frr-startup\") pod \"frr-k8s-9z9n2\" (UID: \"b1f214e9-14db-462f-900c-3652ec7908e5\") " pod="metallb-system/frr-k8s-9z9n2" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.305637 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1f214e9-14db-462f-900c-3652ec7908e5-metrics-certs\") pod \"frr-k8s-9z9n2\" (UID: \"b1f214e9-14db-462f-900c-3652ec7908e5\") " pod="metallb-system/frr-k8s-9z9n2" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.305778 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/b1f214e9-14db-462f-900c-3652ec7908e5-frr-sockets\") pod \"frr-k8s-9z9n2\" (UID: \"b1f214e9-14db-462f-900c-3652ec7908e5\") " pod="metallb-system/frr-k8s-9z9n2" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.305832 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b1f214e9-14db-462f-900c-3652ec7908e5-frr-conf\") pod \"frr-k8s-9z9n2\" (UID: \"b1f214e9-14db-462f-900c-3652ec7908e5\") " pod="metallb-system/frr-k8s-9z9n2" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.305879 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"reloader\" (UniqueName: 
\"kubernetes.io/empty-dir/b1f214e9-14db-462f-900c-3652ec7908e5-reloader\") pod \"frr-k8s-9z9n2\" (UID: \"b1f214e9-14db-462f-900c-3652ec7908e5\") " pod="metallb-system/frr-k8s-9z9n2" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.305963 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b1f214e9-14db-462f-900c-3652ec7908e5-metrics\") pod \"frr-k8s-9z9n2\" (UID: \"b1f214e9-14db-462f-900c-3652ec7908e5\") " pod="metallb-system/frr-k8s-9z9n2" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.340000 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/speaker-qv6gh"] Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.341241 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-qv6gh" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.345044 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-certs-secret" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.345114 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"speaker-dockercfg-sp9w4" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.345527 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-memberlist" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.346749 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"metallb-system"/"metallb-excludel2" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.355993 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["metallb-system/controller-6968d8fdc4-6dhf9"] Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.356903 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="metallb-system/controller-6968d8fdc4-6dhf9" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.360896 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"controller-certs-secret" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.369045 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-6dhf9"] Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.406686 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b667c31-e911-496a-9c8b-12c906e724ec-cert\") pod \"controller-6968d8fdc4-6dhf9\" (UID: \"6b667c31-e911-496a-9c8b-12c906e724ec\") " pod="metallb-system/controller-6968d8fdc4-6dhf9" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.406724 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b1f214e9-14db-462f-900c-3652ec7908e5-metrics\") pod \"frr-k8s-9z9n2\" (UID: \"b1f214e9-14db-462f-900c-3652ec7908e5\") " pod="metallb-system/frr-k8s-9z9n2" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.406743 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc-memberlist\") pod \"speaker-qv6gh\" (UID: \"7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc\") " pod="metallb-system/speaker-qv6gh" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.406795 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zkvz6\" (UniqueName: \"kubernetes.io/projected/b1f214e9-14db-462f-900c-3652ec7908e5-kube-api-access-zkvz6\") pod \"frr-k8s-9z9n2\" (UID: \"b1f214e9-14db-462f-900c-3652ec7908e5\") " pod="metallb-system/frr-k8s-9z9n2" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.406818 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b1f214e9-14db-462f-900c-3652ec7908e5-frr-startup\") pod \"frr-k8s-9z9n2\" (UID: \"b1f214e9-14db-462f-900c-3652ec7908e5\") " pod="metallb-system/frr-k8s-9z9n2" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.406836 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/94655b12-be6a-4043-8f7c-80d1b7fb1a2f-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-qf86j\" (UID: \"94655b12-be6a-4043-8f7c-80d1b7fb1a2f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qf86j" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.406852 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc-metallb-excludel2\") pod \"speaker-qv6gh\" (UID: \"7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc\") " pod="metallb-system/speaker-qv6gh" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.406872 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc-metrics-certs\") pod \"speaker-qv6gh\" (UID: \"7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc\") " pod="metallb-system/speaker-qv6gh" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.406894 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-prsbb\" (UniqueName: \"kubernetes.io/projected/7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc-kube-api-access-prsbb\") pod \"speaker-qv6gh\" (UID: \"7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc\") " pod="metallb-system/speaker-qv6gh" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.406912 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1f214e9-14db-462f-900c-3652ec7908e5-metrics-certs\") pod \"frr-k8s-9z9n2\" (UID: \"b1f214e9-14db-462f-900c-3652ec7908e5\") " pod="metallb-system/frr-k8s-9z9n2" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.406944 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/b1f214e9-14db-462f-900c-3652ec7908e5-frr-sockets\") pod \"frr-k8s-9z9n2\" (UID: \"b1f214e9-14db-462f-900c-3652ec7908e5\") " pod="metallb-system/frr-k8s-9z9n2" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.406957 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b1f214e9-14db-462f-900c-3652ec7908e5-frr-conf\") pod \"frr-k8s-9z9n2\" (UID: \"b1f214e9-14db-462f-900c-3652ec7908e5\") " pod="metallb-system/frr-k8s-9z9n2" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.407051 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics\" (UniqueName: \"kubernetes.io/empty-dir/b1f214e9-14db-462f-900c-3652ec7908e5-metrics\") pod \"frr-k8s-9z9n2\" (UID: \"b1f214e9-14db-462f-900c-3652ec7908e5\") " pod="metallb-system/frr-k8s-9z9n2" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.407078 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b1f214e9-14db-462f-900c-3652ec7908e5-reloader\") pod \"frr-k8s-9z9n2\" (UID: \"b1f214e9-14db-462f-900c-3652ec7908e5\") " pod="metallb-system/frr-k8s-9z9n2" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.407125 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-92mdn\" (UniqueName: \"kubernetes.io/projected/94655b12-be6a-4043-8f7c-80d1b7fb1a2f-kube-api-access-92mdn\") pod \"frr-k8s-webhook-server-7df86c4f6c-qf86j\" (UID: 
\"94655b12-be6a-4043-8f7c-80d1b7fb1a2f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qf86j" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.407147 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tjmxk\" (UniqueName: \"kubernetes.io/projected/6b667c31-e911-496a-9c8b-12c906e724ec-kube-api-access-tjmxk\") pod \"controller-6968d8fdc4-6dhf9\" (UID: \"6b667c31-e911-496a-9c8b-12c906e724ec\") " pod="metallb-system/controller-6968d8fdc4-6dhf9" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.407165 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-conf\" (UniqueName: \"kubernetes.io/empty-dir/b1f214e9-14db-462f-900c-3652ec7908e5-frr-conf\") pod \"frr-k8s-9z9n2\" (UID: \"b1f214e9-14db-462f-900c-3652ec7908e5\") " pod="metallb-system/frr-k8s-9z9n2" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.407173 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b667c31-e911-496a-9c8b-12c906e724ec-metrics-certs\") pod \"controller-6968d8fdc4-6dhf9\" (UID: \"6b667c31-e911-496a-9c8b-12c906e724ec\") " pod="metallb-system/controller-6968d8fdc4-6dhf9" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.407481 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-sockets\" (UniqueName: \"kubernetes.io/empty-dir/b1f214e9-14db-462f-900c-3652ec7908e5-frr-sockets\") pod \"frr-k8s-9z9n2\" (UID: \"b1f214e9-14db-462f-900c-3652ec7908e5\") " pod="metallb-system/frr-k8s-9z9n2" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.407607 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"reloader\" (UniqueName: \"kubernetes.io/empty-dir/b1f214e9-14db-462f-900c-3652ec7908e5-reloader\") pod \"frr-k8s-9z9n2\" (UID: \"b1f214e9-14db-462f-900c-3652ec7908e5\") " pod="metallb-system/frr-k8s-9z9n2" Jan 31 14:53:45 crc 
kubenswrapper[4751]: I0131 14:53:45.407781 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"frr-startup\" (UniqueName: \"kubernetes.io/configmap/b1f214e9-14db-462f-900c-3652ec7908e5-frr-startup\") pod \"frr-k8s-9z9n2\" (UID: \"b1f214e9-14db-462f-900c-3652ec7908e5\") " pod="metallb-system/frr-k8s-9z9n2" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.419682 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/b1f214e9-14db-462f-900c-3652ec7908e5-metrics-certs\") pod \"frr-k8s-9z9n2\" (UID: \"b1f214e9-14db-462f-900c-3652ec7908e5\") " pod="metallb-system/frr-k8s-9z9n2" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.432885 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zkvz6\" (UniqueName: \"kubernetes.io/projected/b1f214e9-14db-462f-900c-3652ec7908e5-kube-api-access-zkvz6\") pod \"frr-k8s-9z9n2\" (UID: \"b1f214e9-14db-462f-900c-3652ec7908e5\") " pod="metallb-system/frr-k8s-9z9n2" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.507864 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-92mdn\" (UniqueName: \"kubernetes.io/projected/94655b12-be6a-4043-8f7c-80d1b7fb1a2f-kube-api-access-92mdn\") pod \"frr-k8s-webhook-server-7df86c4f6c-qf86j\" (UID: \"94655b12-be6a-4043-8f7c-80d1b7fb1a2f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qf86j" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.507907 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tjmxk\" (UniqueName: \"kubernetes.io/projected/6b667c31-e911-496a-9c8b-12c906e724ec-kube-api-access-tjmxk\") pod \"controller-6968d8fdc4-6dhf9\" (UID: \"6b667c31-e911-496a-9c8b-12c906e724ec\") " pod="metallb-system/controller-6968d8fdc4-6dhf9" Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.507923 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b667c31-e911-496a-9c8b-12c906e724ec-metrics-certs\") pod \"controller-6968d8fdc4-6dhf9\" (UID: \"6b667c31-e911-496a-9c8b-12c906e724ec\") " pod="metallb-system/controller-6968d8fdc4-6dhf9"
Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.507942 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b667c31-e911-496a-9c8b-12c906e724ec-cert\") pod \"controller-6968d8fdc4-6dhf9\" (UID: \"6b667c31-e911-496a-9c8b-12c906e724ec\") " pod="metallb-system/controller-6968d8fdc4-6dhf9"
Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.507965 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc-memberlist\") pod \"speaker-qv6gh\" (UID: \"7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc\") " pod="metallb-system/speaker-qv6gh"
Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.507987 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc-metallb-excludel2\") pod \"speaker-qv6gh\" (UID: \"7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc\") " pod="metallb-system/speaker-qv6gh"
Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.508005 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/94655b12-be6a-4043-8f7c-80d1b7fb1a2f-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-qf86j\" (UID: \"94655b12-be6a-4043-8f7c-80d1b7fb1a2f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qf86j"
Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.508026 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc-metrics-certs\") pod \"speaker-qv6gh\" (UID: \"7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc\") " pod="metallb-system/speaker-qv6gh"
Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.508047 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-prsbb\" (UniqueName: \"kubernetes.io/projected/7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc-kube-api-access-prsbb\") pod \"speaker-qv6gh\" (UID: \"7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc\") " pod="metallb-system/speaker-qv6gh"
Jan 31 14:53:45 crc kubenswrapper[4751]: E0131 14:53:45.508228 4751 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 31 14:53:45 crc kubenswrapper[4751]: E0131 14:53:45.508286 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc-memberlist podName:7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc nodeName:}" failed. No retries permitted until 2026-01-31 14:53:46.008266003 +0000 UTC m=+730.382978888 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc-memberlist") pod "speaker-qv6gh" (UID: "7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc") : secret "metallb-memberlist" not found
Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.508747 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metallb-excludel2\" (UniqueName: \"kubernetes.io/configmap/7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc-metallb-excludel2\") pod \"speaker-qv6gh\" (UID: \"7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc\") " pod="metallb-system/speaker-qv6gh"
Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.525755 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc-metrics-certs\") pod \"speaker-qv6gh\" (UID: \"7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc\") " pod="metallb-system/speaker-qv6gh"
Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.525911 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/94655b12-be6a-4043-8f7c-80d1b7fb1a2f-cert\") pod \"frr-k8s-webhook-server-7df86c4f6c-qf86j\" (UID: \"94655b12-be6a-4043-8f7c-80d1b7fb1a2f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qf86j"
Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.525946 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/6b667c31-e911-496a-9c8b-12c906e724ec-metrics-certs\") pod \"controller-6968d8fdc4-6dhf9\" (UID: \"6b667c31-e911-496a-9c8b-12c906e724ec\") " pod="metallb-system/controller-6968d8fdc4-6dhf9"
Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.526058 4751 reflector.go:368] Caches populated for *v1.Secret from object-"metallb-system"/"metallb-webhook-cert"
Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.529876 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-prsbb\" (UniqueName: \"kubernetes.io/projected/7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc-kube-api-access-prsbb\") pod \"speaker-qv6gh\" (UID: \"7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc\") " pod="metallb-system/speaker-qv6gh"
Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.530130 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6b667c31-e911-496a-9c8b-12c906e724ec-cert\") pod \"controller-6968d8fdc4-6dhf9\" (UID: \"6b667c31-e911-496a-9c8b-12c906e724ec\") " pod="metallb-system/controller-6968d8fdc4-6dhf9"
Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.532264 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-92mdn\" (UniqueName: \"kubernetes.io/projected/94655b12-be6a-4043-8f7c-80d1b7fb1a2f-kube-api-access-92mdn\") pod \"frr-k8s-webhook-server-7df86c4f6c-qf86j\" (UID: \"94655b12-be6a-4043-8f7c-80d1b7fb1a2f\") " pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qf86j"
Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.532301 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tjmxk\" (UniqueName: \"kubernetes.io/projected/6b667c31-e911-496a-9c8b-12c906e724ec-kube-api-access-tjmxk\") pod \"controller-6968d8fdc4-6dhf9\" (UID: \"6b667c31-e911-496a-9c8b-12c906e724ec\") " pod="metallb-system/controller-6968d8fdc4-6dhf9"
Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.571587 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-9z9n2"
Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.576305 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qf86j"
Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.670763 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/controller-6968d8fdc4-6dhf9"
Jan 31 14:53:45 crc kubenswrapper[4751]: I0131 14:53:45.891209 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/controller-6968d8fdc4-6dhf9"]
Jan 31 14:53:45 crc kubenswrapper[4751]: W0131 14:53:45.896215 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6b667c31_e911_496a_9c8b_12c906e724ec.slice/crio-d4d116ee8dafcae9c090310f10d6ed4f9123268eef0738f0033c39ed2c3f8b2b WatchSource:0}: Error finding container d4d116ee8dafcae9c090310f10d6ed4f9123268eef0738f0033c39ed2c3f8b2b: Status 404 returned error can't find the container with id d4d116ee8dafcae9c090310f10d6ed4f9123268eef0738f0033c39ed2c3f8b2b
Jan 31 14:53:46 crc kubenswrapper[4751]: I0131 14:53:46.039218 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc-memberlist\") pod \"speaker-qv6gh\" (UID: \"7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc\") " pod="metallb-system/speaker-qv6gh"
Jan 31 14:53:46 crc kubenswrapper[4751]: E0131 14:53:46.039375 4751 secret.go:188] Couldn't get secret metallb-system/metallb-memberlist: secret "metallb-memberlist" not found
Jan 31 14:53:46 crc kubenswrapper[4751]: E0131 14:53:46.039593 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc-memberlist podName:7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc nodeName:}" failed. No retries permitted until 2026-01-31 14:53:47.03957607 +0000 UTC m=+731.414288955 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "memberlist" (UniqueName: "kubernetes.io/secret/7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc-memberlist") pod "speaker-qv6gh" (UID: "7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc") : secret "metallb-memberlist" not found
Jan 31 14:53:46 crc kubenswrapper[4751]: I0131 14:53:46.053426 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["metallb-system/frr-k8s-webhook-server-7df86c4f6c-qf86j"]
Jan 31 14:53:46 crc kubenswrapper[4751]: W0131 14:53:46.056764 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod94655b12_be6a_4043_8f7c_80d1b7fb1a2f.slice/crio-389496d9322ade055f02884ba64abb01cdde5b6cd775d1f30ed0ee2c3cb8a7ae WatchSource:0}: Error finding container 389496d9322ade055f02884ba64abb01cdde5b6cd775d1f30ed0ee2c3cb8a7ae: Status 404 returned error can't find the container with id 389496d9322ade055f02884ba64abb01cdde5b6cd775d1f30ed0ee2c3cb8a7ae
Jan 31 14:53:46 crc kubenswrapper[4751]: I0131 14:53:46.765316 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9z9n2" event={"ID":"b1f214e9-14db-462f-900c-3652ec7908e5","Type":"ContainerStarted","Data":"518736c27fabd294ea93ad285af465864a682c0d8cb72298fefc50f24acee05b"}
Jan 31 14:53:46 crc kubenswrapper[4751]: I0131 14:53:46.767383 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-6dhf9" event={"ID":"6b667c31-e911-496a-9c8b-12c906e724ec","Type":"ContainerStarted","Data":"a8b15329d8728f30cc4260b6a3385271f981350e73f6b2e23267cc83aae5ba6f"}
Jan 31 14:53:46 crc kubenswrapper[4751]: I0131 14:53:46.767846 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-6dhf9" event={"ID":"6b667c31-e911-496a-9c8b-12c906e724ec","Type":"ContainerStarted","Data":"d4d116ee8dafcae9c090310f10d6ed4f9123268eef0738f0033c39ed2c3f8b2b"}
Jan 31 14:53:46 crc kubenswrapper[4751]: I0131 14:53:46.769229 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qf86j" event={"ID":"94655b12-be6a-4043-8f7c-80d1b7fb1a2f","Type":"ContainerStarted","Data":"389496d9322ade055f02884ba64abb01cdde5b6cd775d1f30ed0ee2c3cb8a7ae"}
Jan 31 14:53:47 crc kubenswrapper[4751]: I0131 14:53:47.051687 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc-memberlist\") pod \"speaker-qv6gh\" (UID: \"7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc\") " pod="metallb-system/speaker-qv6gh"
Jan 31 14:53:47 crc kubenswrapper[4751]: I0131 14:53:47.059409 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"memberlist\" (UniqueName: \"kubernetes.io/secret/7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc-memberlist\") pod \"speaker-qv6gh\" (UID: \"7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc\") " pod="metallb-system/speaker-qv6gh"
Jan 31 14:53:47 crc kubenswrapper[4751]: I0131 14:53:47.154203 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="metallb-system/speaker-qv6gh"
Jan 31 14:53:47 crc kubenswrapper[4751]: I0131 14:53:47.792559 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qv6gh" event={"ID":"7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc","Type":"ContainerStarted","Data":"190173bc608108504a2a8a915f84497adc2dbddcc38e89ac61add1a81967cf73"}
Jan 31 14:53:47 crc kubenswrapper[4751]: I0131 14:53:47.792603 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qv6gh" event={"ID":"7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc","Type":"ContainerStarted","Data":"4ab8a47082d2773e0e4484534cf763c30b1a72d1b83d66d59262f5fa02591767"}
Jan 31 14:53:49 crc kubenswrapper[4751]: I0131 14:53:49.807499 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/controller-6968d8fdc4-6dhf9" event={"ID":"6b667c31-e911-496a-9c8b-12c906e724ec","Type":"ContainerStarted","Data":"8bb42588577eb54f01b6d5e02179bed292d5a8faa442ca4ef9d8daa015180af1"}
Jan 31 14:53:49 crc kubenswrapper[4751]: I0131 14:53:49.809249 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/controller-6968d8fdc4-6dhf9"
Jan 31 14:53:49 crc kubenswrapper[4751]: I0131 14:53:49.815817 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/speaker-qv6gh" event={"ID":"7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc","Type":"ContainerStarted","Data":"29c7b873bfa2415d9deba65b7ff9c29b6493af1d4c749b28d36a4b6c43e9814c"}
Jan 31 14:53:49 crc kubenswrapper[4751]: I0131 14:53:49.815913 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/speaker-qv6gh"
Jan 31 14:53:49 crc kubenswrapper[4751]: I0131 14:53:49.826650 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/controller-6968d8fdc4-6dhf9" podStartSLOduration=1.857049602 podStartE2EDuration="4.826629189s" podCreationTimestamp="2026-01-31 14:53:45 +0000 UTC" firstStartedPulling="2026-01-31 14:53:46.011237893 +0000 UTC m=+730.385950778" lastFinishedPulling="2026-01-31 14:53:48.98081748 +0000 UTC m=+733.355530365" observedRunningTime="2026-01-31 14:53:49.823713332 +0000 UTC m=+734.198426217" watchObservedRunningTime="2026-01-31 14:53:49.826629189 +0000 UTC m=+734.201342074"
Jan 31 14:53:49 crc kubenswrapper[4751]: I0131 14:53:49.846842 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/speaker-qv6gh" podStartSLOduration=3.432028772 podStartE2EDuration="4.846810691s" podCreationTimestamp="2026-01-31 14:53:45 +0000 UTC" firstStartedPulling="2026-01-31 14:53:47.57584837 +0000 UTC m=+731.950561255" lastFinishedPulling="2026-01-31 14:53:48.990630289 +0000 UTC m=+733.365343174" observedRunningTime="2026-01-31 14:53:49.845387913 +0000 UTC m=+734.220100838" watchObservedRunningTime="2026-01-31 14:53:49.846810691 +0000 UTC m=+734.221523616"
Jan 31 14:53:52 crc kubenswrapper[4751]: I0131 14:53:52.835931 4751 generic.go:334] "Generic (PLEG): container finished" podID="b1f214e9-14db-462f-900c-3652ec7908e5" containerID="e257b83e2996505f76b5e69df401be892c8d5b5e9f72784f8bf893f34751a7e9" exitCode=0
Jan 31 14:53:52 crc kubenswrapper[4751]: I0131 14:53:52.836126 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9z9n2" event={"ID":"b1f214e9-14db-462f-900c-3652ec7908e5","Type":"ContainerDied","Data":"e257b83e2996505f76b5e69df401be892c8d5b5e9f72784f8bf893f34751a7e9"}
Jan 31 14:53:52 crc kubenswrapper[4751]: I0131 14:53:52.840531 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qf86j" event={"ID":"94655b12-be6a-4043-8f7c-80d1b7fb1a2f","Type":"ContainerStarted","Data":"5574e9a95192d2c8caf42a417e367b5c74a01a96dca3b6ace3a0b7e862064801"}
Jan 31 14:53:52 crc kubenswrapper[4751]: I0131 14:53:52.841115 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qf86j"
Jan 31 14:53:52 crc kubenswrapper[4751]: I0131 14:53:52.895110 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qf86j" podStartSLOduration=1.454432017 podStartE2EDuration="7.895059953s" podCreationTimestamp="2026-01-31 14:53:45 +0000 UTC" firstStartedPulling="2026-01-31 14:53:46.059196557 +0000 UTC m=+730.433909452" lastFinishedPulling="2026-01-31 14:53:52.499824503 +0000 UTC m=+736.874537388" observedRunningTime="2026-01-31 14:53:52.883288222 +0000 UTC m=+737.258001137" watchObservedRunningTime="2026-01-31 14:53:52.895059953 +0000 UTC m=+737.269772878"
Jan 31 14:53:53 crc kubenswrapper[4751]: I0131 14:53:53.854340 4751 generic.go:334] "Generic (PLEG): container finished" podID="b1f214e9-14db-462f-900c-3652ec7908e5" containerID="b98b40b2437353e75bd1003d04e4148c2446d403f633b173d2a155e2c2d07298" exitCode=0
Jan 31 14:53:53 crc kubenswrapper[4751]: I0131 14:53:53.854416 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9z9n2" event={"ID":"b1f214e9-14db-462f-900c-3652ec7908e5","Type":"ContainerDied","Data":"b98b40b2437353e75bd1003d04e4148c2446d403f633b173d2a155e2c2d07298"}
Jan 31 14:53:54 crc kubenswrapper[4751]: I0131 14:53:54.864179 4751 generic.go:334] "Generic (PLEG): container finished" podID="b1f214e9-14db-462f-900c-3652ec7908e5" containerID="6828715cee1ab3682671ec05297551484dd98c12c6fd0285752d41b1067aee54" exitCode=0
Jan 31 14:53:54 crc kubenswrapper[4751]: I0131 14:53:54.867145 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9z9n2" event={"ID":"b1f214e9-14db-462f-900c-3652ec7908e5","Type":"ContainerDied","Data":"6828715cee1ab3682671ec05297551484dd98c12c6fd0285752d41b1067aee54"}
Jan 31 14:53:55 crc kubenswrapper[4751]: I0131 14:53:55.678267 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/controller-6968d8fdc4-6dhf9"
Jan 31 14:53:55 crc kubenswrapper[4751]: I0131 14:53:55.879134 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9z9n2" event={"ID":"b1f214e9-14db-462f-900c-3652ec7908e5","Type":"ContainerStarted","Data":"b46a14e9de29292256b412db85b552467e7f108925d938e36716fa0bdd0d2eff"}
Jan 31 14:53:55 crc kubenswrapper[4751]: I0131 14:53:55.879174 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9z9n2" event={"ID":"b1f214e9-14db-462f-900c-3652ec7908e5","Type":"ContainerStarted","Data":"1f010f84732b788fb5ace83d817423b09432c5953343ba03b1d5e092938cfc31"}
Jan 31 14:53:55 crc kubenswrapper[4751]: I0131 14:53:55.879185 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9z9n2" event={"ID":"b1f214e9-14db-462f-900c-3652ec7908e5","Type":"ContainerStarted","Data":"1e02ed482355326b14499113df95495187e4101200954abfc593f8342f579635"}
Jan 31 14:53:55 crc kubenswrapper[4751]: I0131 14:53:55.879197 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9z9n2" event={"ID":"b1f214e9-14db-462f-900c-3652ec7908e5","Type":"ContainerStarted","Data":"305700feab0f0c008275b1522e6cf82c889e25a0932e5d9125d079744f803262"}
Jan 31 14:53:55 crc kubenswrapper[4751]: I0131 14:53:55.879206 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9z9n2" event={"ID":"b1f214e9-14db-462f-900c-3652ec7908e5","Type":"ContainerStarted","Data":"a06051de686fba3f136e364fbce12cfafe61eff286ab01c7331656ea2bea5ca4"}
Jan 31 14:53:56 crc kubenswrapper[4751]: I0131 14:53:56.885815 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="metallb-system/frr-k8s-9z9n2" event={"ID":"b1f214e9-14db-462f-900c-3652ec7908e5","Type":"ContainerStarted","Data":"56b997727b93f490fefed1daa5c38fe367d74cc0f7af37afe6e03a931ce56c3e"}
Jan 31 14:53:56 crc kubenswrapper[4751]: I0131 14:53:56.886278 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="metallb-system/frr-k8s-9z9n2"
Jan 31 14:53:56 crc kubenswrapper[4751]: I0131 14:53:56.915330 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="metallb-system/frr-k8s-9z9n2" podStartSLOduration=5.194901807 podStartE2EDuration="11.915310029s" podCreationTimestamp="2026-01-31 14:53:45 +0000 UTC" firstStartedPulling="2026-01-31 14:53:45.763796509 +0000 UTC m=+730.138509394" lastFinishedPulling="2026-01-31 14:53:52.484204721 +0000 UTC m=+736.858917616" observedRunningTime="2026-01-31 14:53:56.911189251 +0000 UTC m=+741.285902146" watchObservedRunningTime="2026-01-31 14:53:56.915310029 +0000 UTC m=+741.290022914"
Jan 31 14:53:57 crc kubenswrapper[4751]: I0131 14:53:57.160016 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/speaker-qv6gh"
Jan 31 14:54:00 crc kubenswrapper[4751]: I0131 14:54:00.571887 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="metallb-system/frr-k8s-9z9n2"
Jan 31 14:54:00 crc kubenswrapper[4751]: I0131 14:54:00.639609 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="metallb-system/frr-k8s-9z9n2"
Jan 31 14:54:02 crc kubenswrapper[4751]: I0131 14:54:02.591308 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-8khjf"]
Jan 31 14:54:02 crc kubenswrapper[4751]: I0131 14:54:02.593491 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-8khjf"
Jan 31 14:54:02 crc kubenswrapper[4751]: I0131 14:54:02.596469 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"openshift-service-ca.crt"
Jan 31 14:54:02 crc kubenswrapper[4751]: I0131 14:54:02.596824 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-index-dockercfg-rlk9f"
Jan 31 14:54:02 crc kubenswrapper[4751]: I0131 14:54:02.596891 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openstack-operators"/"kube-root-ca.crt"
Jan 31 14:54:02 crc kubenswrapper[4751]: I0131 14:54:02.613720 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-8khjf"]
Jan 31 14:54:02 crc kubenswrapper[4751]: I0131 14:54:02.711524 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ct4t5\" (UniqueName: \"kubernetes.io/projected/45186c12-b6c6-4360-91c6-f44b7a20835c-kube-api-access-ct4t5\") pod \"mariadb-operator-index-8khjf\" (UID: \"45186c12-b6c6-4360-91c6-f44b7a20835c\") " pod="openstack-operators/mariadb-operator-index-8khjf"
Jan 31 14:54:02 crc kubenswrapper[4751]: I0131 14:54:02.813287 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ct4t5\" (UniqueName: \"kubernetes.io/projected/45186c12-b6c6-4360-91c6-f44b7a20835c-kube-api-access-ct4t5\") pod \"mariadb-operator-index-8khjf\" (UID: \"45186c12-b6c6-4360-91c6-f44b7a20835c\") " pod="openstack-operators/mariadb-operator-index-8khjf"
Jan 31 14:54:02 crc kubenswrapper[4751]: I0131 14:54:02.833026 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ct4t5\" (UniqueName: \"kubernetes.io/projected/45186c12-b6c6-4360-91c6-f44b7a20835c-kube-api-access-ct4t5\") pod \"mariadb-operator-index-8khjf\" (UID: \"45186c12-b6c6-4360-91c6-f44b7a20835c\") " pod="openstack-operators/mariadb-operator-index-8khjf"
Jan 31 14:54:02 crc kubenswrapper[4751]: I0131 14:54:02.940095 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-8khjf"
Jan 31 14:54:03 crc kubenswrapper[4751]: I0131 14:54:03.168703 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-8khjf"]
Jan 31 14:54:03 crc kubenswrapper[4751]: W0131 14:54:03.182280 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod45186c12_b6c6_4360_91c6_f44b7a20835c.slice/crio-df12732ce1dcb1caf73b1304f1223fbf619c3f1573236232e36e405a77fa3ed6 WatchSource:0}: Error finding container df12732ce1dcb1caf73b1304f1223fbf619c3f1573236232e36e405a77fa3ed6: Status 404 returned error can't find the container with id df12732ce1dcb1caf73b1304f1223fbf619c3f1573236232e36e405a77fa3ed6
Jan 31 14:54:03 crc kubenswrapper[4751]: I0131 14:54:03.932129 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-8khjf" event={"ID":"45186c12-b6c6-4360-91c6-f44b7a20835c","Type":"ContainerStarted","Data":"df12732ce1dcb1caf73b1304f1223fbf619c3f1573236232e36e405a77fa3ed6"}
Jan 31 14:54:04 crc kubenswrapper[4751]: I0131 14:54:04.943232 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-8khjf" event={"ID":"45186c12-b6c6-4360-91c6-f44b7a20835c","Type":"ContainerStarted","Data":"6676f9025b980659e32fdcadd8ff687e74b4114ecb7fb9a20f0836497a011c64"}
Jan 31 14:54:04 crc kubenswrapper[4751]: I0131 14:54:04.964566 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-8khjf" podStartSLOduration=2.052181049 podStartE2EDuration="2.964535743s" podCreationTimestamp="2026-01-31 14:54:02 +0000 UTC" firstStartedPulling="2026-01-31 14:54:03.184855244 +0000 UTC m=+747.559568129" lastFinishedPulling="2026-01-31 14:54:04.097209918 +0000 UTC m=+748.471922823" observedRunningTime="2026-01-31 14:54:04.959673655 +0000 UTC m=+749.334386580" watchObservedRunningTime="2026-01-31 14:54:04.964535743 +0000 UTC m=+749.339248668"
Jan 31 14:54:05 crc kubenswrapper[4751]: I0131 14:54:05.575837 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-9z9n2"
Jan 31 14:54:05 crc kubenswrapper[4751]: I0131 14:54:05.583333 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="metallb-system/frr-k8s-webhook-server-7df86c4f6c-qf86j"
Jan 31 14:54:05 crc kubenswrapper[4751]: I0131 14:54:05.964277 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-8khjf"]
Jan 31 14:54:06 crc kubenswrapper[4751]: I0131 14:54:06.569768 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-index-lpshr"]
Jan 31 14:54:06 crc kubenswrapper[4751]: I0131 14:54:06.570546 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-lpshr"
Jan 31 14:54:06 crc kubenswrapper[4751]: I0131 14:54:06.582907 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-lpshr"]
Jan 31 14:54:06 crc kubenswrapper[4751]: I0131 14:54:06.675612 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hltq9\" (UniqueName: \"kubernetes.io/projected/11fab5ff-3041-45d3-8aab-29e25ed8c6ae-kube-api-access-hltq9\") pod \"mariadb-operator-index-lpshr\" (UID: \"11fab5ff-3041-45d3-8aab-29e25ed8c6ae\") " pod="openstack-operators/mariadb-operator-index-lpshr"
Jan 31 14:54:06 crc kubenswrapper[4751]: I0131 14:54:06.776568 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hltq9\" (UniqueName: \"kubernetes.io/projected/11fab5ff-3041-45d3-8aab-29e25ed8c6ae-kube-api-access-hltq9\") pod \"mariadb-operator-index-lpshr\" (UID: \"11fab5ff-3041-45d3-8aab-29e25ed8c6ae\") " pod="openstack-operators/mariadb-operator-index-lpshr"
Jan 31 14:54:06 crc kubenswrapper[4751]: I0131 14:54:06.826751 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hltq9\" (UniqueName: \"kubernetes.io/projected/11fab5ff-3041-45d3-8aab-29e25ed8c6ae-kube-api-access-hltq9\") pod \"mariadb-operator-index-lpshr\" (UID: \"11fab5ff-3041-45d3-8aab-29e25ed8c6ae\") " pod="openstack-operators/mariadb-operator-index-lpshr"
Jan 31 14:54:06 crc kubenswrapper[4751]: I0131 14:54:06.894233 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-lpshr"
Jan 31 14:54:06 crc kubenswrapper[4751]: I0131 14:54:06.957470 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-index-8khjf" podUID="45186c12-b6c6-4360-91c6-f44b7a20835c" containerName="registry-server" containerID="cri-o://6676f9025b980659e32fdcadd8ff687e74b4114ecb7fb9a20f0836497a011c64" gracePeriod=2
Jan 31 14:54:07 crc kubenswrapper[4751]: I0131 14:54:07.106382 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-index-lpshr"]
Jan 31 14:54:07 crc kubenswrapper[4751]: W0131 14:54:07.116259 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod11fab5ff_3041_45d3_8aab_29e25ed8c6ae.slice/crio-713674d4d326d8545cf66e50aee47bded08afa8e12a04a763f8540e3552c31dd WatchSource:0}: Error finding container 713674d4d326d8545cf66e50aee47bded08afa8e12a04a763f8540e3552c31dd: Status 404 returned error can't find the container with id 713674d4d326d8545cf66e50aee47bded08afa8e12a04a763f8540e3552c31dd
Jan 31 14:54:07 crc kubenswrapper[4751]: I0131 14:54:07.965598 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-lpshr" event={"ID":"11fab5ff-3041-45d3-8aab-29e25ed8c6ae","Type":"ContainerStarted","Data":"713674d4d326d8545cf66e50aee47bded08afa8e12a04a763f8540e3552c31dd"}
Jan 31 14:54:07 crc kubenswrapper[4751]: I0131 14:54:07.967715 4751 generic.go:334] "Generic (PLEG): container finished" podID="45186c12-b6c6-4360-91c6-f44b7a20835c" containerID="6676f9025b980659e32fdcadd8ff687e74b4114ecb7fb9a20f0836497a011c64" exitCode=0
Jan 31 14:54:07 crc kubenswrapper[4751]: I0131 14:54:07.967780 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-8khjf" event={"ID":"45186c12-b6c6-4360-91c6-f44b7a20835c","Type":"ContainerDied","Data":"6676f9025b980659e32fdcadd8ff687e74b4114ecb7fb9a20f0836497a011c64"}
Jan 31 14:54:08 crc kubenswrapper[4751]: I0131 14:54:08.358162 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-8khjf"
Jan 31 14:54:08 crc kubenswrapper[4751]: I0131 14:54:08.423528 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ct4t5\" (UniqueName: \"kubernetes.io/projected/45186c12-b6c6-4360-91c6-f44b7a20835c-kube-api-access-ct4t5\") pod \"45186c12-b6c6-4360-91c6-f44b7a20835c\" (UID: \"45186c12-b6c6-4360-91c6-f44b7a20835c\") "
Jan 31 14:54:08 crc kubenswrapper[4751]: I0131 14:54:08.431603 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45186c12-b6c6-4360-91c6-f44b7a20835c-kube-api-access-ct4t5" (OuterVolumeSpecName: "kube-api-access-ct4t5") pod "45186c12-b6c6-4360-91c6-f44b7a20835c" (UID: "45186c12-b6c6-4360-91c6-f44b7a20835c"). InnerVolumeSpecName "kube-api-access-ct4t5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 14:54:08 crc kubenswrapper[4751]: I0131 14:54:08.525199 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ct4t5\" (UniqueName: \"kubernetes.io/projected/45186c12-b6c6-4360-91c6-f44b7a20835c-kube-api-access-ct4t5\") on node \"crc\" DevicePath \"\""
Jan 31 14:54:08 crc kubenswrapper[4751]: I0131 14:54:08.976968 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-8khjf"
Jan 31 14:54:08 crc kubenswrapper[4751]: I0131 14:54:08.976985 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-8khjf" event={"ID":"45186c12-b6c6-4360-91c6-f44b7a20835c","Type":"ContainerDied","Data":"df12732ce1dcb1caf73b1304f1223fbf619c3f1573236232e36e405a77fa3ed6"}
Jan 31 14:54:08 crc kubenswrapper[4751]: I0131 14:54:08.977110 4751 scope.go:117] "RemoveContainer" containerID="6676f9025b980659e32fdcadd8ff687e74b4114ecb7fb9a20f0836497a011c64"
Jan 31 14:54:08 crc kubenswrapper[4751]: I0131 14:54:08.979577 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-lpshr" event={"ID":"11fab5ff-3041-45d3-8aab-29e25ed8c6ae","Type":"ContainerStarted","Data":"ee8f26396384b77a167d32d6b25c5bda6d0d4b830d367daeecc1c7c591a2ef63"}
Jan 31 14:54:09 crc kubenswrapper[4751]: I0131 14:54:09.019786 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-index-lpshr" podStartSLOduration=2.076689559 podStartE2EDuration="3.019761281s" podCreationTimestamp="2026-01-31 14:54:06 +0000 UTC" firstStartedPulling="2026-01-31 14:54:07.119237658 +0000 UTC m=+751.493950543" lastFinishedPulling="2026-01-31 14:54:08.06230934 +0000 UTC m=+752.437022265" observedRunningTime="2026-01-31 14:54:09.008693469 +0000 UTC m=+753.383406384" watchObservedRunningTime="2026-01-31 14:54:09.019761281 +0000 UTC m=+753.394474196"
Jan 31 14:54:09 crc kubenswrapper[4751]: I0131 14:54:09.027682 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-8khjf"]
Jan 31 14:54:09 crc kubenswrapper[4751]: I0131 14:54:09.033825 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-index-8khjf"]
Jan 31 14:54:09 crc kubenswrapper[4751]: I0131 14:54:09.591758 4751 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 31 14:54:10 crc kubenswrapper[4751]: I0131 14:54:10.417443 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="45186c12-b6c6-4360-91c6-f44b7a20835c" path="/var/lib/kubelet/pods/45186c12-b6c6-4360-91c6-f44b7a20835c/volumes"
Jan 31 14:54:16 crc kubenswrapper[4751]: I0131 14:54:16.894845 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-index-lpshr"
Jan 31 14:54:16 crc kubenswrapper[4751]: I0131 14:54:16.895306 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/mariadb-operator-index-lpshr"
Jan 31 14:54:16 crc kubenswrapper[4751]: I0131 14:54:16.937896 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/mariadb-operator-index-lpshr"
Jan 31 14:54:17 crc kubenswrapper[4751]: I0131 14:54:17.084013 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-index-lpshr"
Jan 31 14:54:18 crc kubenswrapper[4751]: I0131 14:54:18.207924 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40b4j2q"]
Jan 31 14:54:18 crc kubenswrapper[4751]: E0131 14:54:18.208444 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="45186c12-b6c6-4360-91c6-f44b7a20835c" containerName="registry-server"
Jan 31 14:54:18 crc kubenswrapper[4751]: I0131 14:54:18.208458 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="45186c12-b6c6-4360-91c6-f44b7a20835c" containerName="registry-server"
Jan 31 14:54:18 crc kubenswrapper[4751]: I0131 14:54:18.208595 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="45186c12-b6c6-4360-91c6-f44b7a20835c" containerName="registry-server"
Jan 31 14:54:18 crc kubenswrapper[4751]: I0131 14:54:18.209474 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40b4j2q"
Jan 31 14:54:18 crc kubenswrapper[4751]: I0131 14:54:18.211690 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-wxkjx"
Jan 31 14:54:18 crc kubenswrapper[4751]: I0131 14:54:18.223276 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40b4j2q"]
Jan 31 14:54:18 crc kubenswrapper[4751]: I0131 14:54:18.290447 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/667a6cec-bf73-4340-9be6-f4bc10182004-util\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40b4j2q\" (UID: \"667a6cec-bf73-4340-9be6-f4bc10182004\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40b4j2q"
Jan 31 14:54:18 crc kubenswrapper[4751]: I0131 14:54:18.290505 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bwwbj\" (UniqueName: \"kubernetes.io/projected/667a6cec-bf73-4340-9be6-f4bc10182004-kube-api-access-bwwbj\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40b4j2q\" (UID: \"667a6cec-bf73-4340-9be6-f4bc10182004\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40b4j2q"
Jan 31 14:54:18 crc kubenswrapper[4751]: I0131 14:54:18.290613 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/667a6cec-bf73-4340-9be6-f4bc10182004-bundle\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40b4j2q\" (UID: \"667a6cec-bf73-4340-9be6-f4bc10182004\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40b4j2q"
Jan 31 14:54:18 crc kubenswrapper[4751]: I0131 14:54:18.391973 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/667a6cec-bf73-4340-9be6-f4bc10182004-util\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40b4j2q\" (UID: \"667a6cec-bf73-4340-9be6-f4bc10182004\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40b4j2q"
Jan 31 14:54:18 crc kubenswrapper[4751]: I0131 14:54:18.392036 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bwwbj\" (UniqueName: \"kubernetes.io/projected/667a6cec-bf73-4340-9be6-f4bc10182004-kube-api-access-bwwbj\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40b4j2q\" (UID: \"667a6cec-bf73-4340-9be6-f4bc10182004\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40b4j2q"
Jan 31 14:54:18 crc kubenswrapper[4751]: I0131 14:54:18.392195 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/667a6cec-bf73-4340-9be6-f4bc10182004-bundle\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40b4j2q\" (UID: \"667a6cec-bf73-4340-9be6-f4bc10182004\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40b4j2q"
Jan 31 14:54:18 crc kubenswrapper[4751]: I0131 14:54:18.392623 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/667a6cec-bf73-4340-9be6-f4bc10182004-util\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40b4j2q\" (UID: \"667a6cec-bf73-4340-9be6-f4bc10182004\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40b4j2q"
Jan 31 14:54:18 crc kubenswrapper[4751]: I0131 14:54:18.392946 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName:
\"kubernetes.io/empty-dir/667a6cec-bf73-4340-9be6-f4bc10182004-bundle\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40b4j2q\" (UID: \"667a6cec-bf73-4340-9be6-f4bc10182004\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40b4j2q" Jan 31 14:54:18 crc kubenswrapper[4751]: I0131 14:54:18.426329 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bwwbj\" (UniqueName: \"kubernetes.io/projected/667a6cec-bf73-4340-9be6-f4bc10182004-kube-api-access-bwwbj\") pod \"f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40b4j2q\" (UID: \"667a6cec-bf73-4340-9be6-f4bc10182004\") " pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40b4j2q" Jan 31 14:54:18 crc kubenswrapper[4751]: I0131 14:54:18.539587 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40b4j2q" Jan 31 14:54:18 crc kubenswrapper[4751]: I0131 14:54:18.785755 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40b4j2q"] Jan 31 14:54:18 crc kubenswrapper[4751]: W0131 14:54:18.790322 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod667a6cec_bf73_4340_9be6_f4bc10182004.slice/crio-d7229488283ac7b8dda0826471cb965309cf1c47e2f8ba0d50557f1751121c0d WatchSource:0}: Error finding container d7229488283ac7b8dda0826471cb965309cf1c47e2f8ba0d50557f1751121c0d: Status 404 returned error can't find the container with id d7229488283ac7b8dda0826471cb965309cf1c47e2f8ba0d50557f1751121c0d Jan 31 14:54:19 crc kubenswrapper[4751]: I0131 14:54:19.070550 4751 generic.go:334] "Generic (PLEG): container finished" podID="667a6cec-bf73-4340-9be6-f4bc10182004" containerID="8c99859db003b8960447da601e95711f7b0d1554d7ee22f9d6cb9490f3263093" exitCode=0 Jan 31 
14:54:19 crc kubenswrapper[4751]: I0131 14:54:19.070604 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40b4j2q" event={"ID":"667a6cec-bf73-4340-9be6-f4bc10182004","Type":"ContainerDied","Data":"8c99859db003b8960447da601e95711f7b0d1554d7ee22f9d6cb9490f3263093"} Jan 31 14:54:19 crc kubenswrapper[4751]: I0131 14:54:19.070640 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40b4j2q" event={"ID":"667a6cec-bf73-4340-9be6-f4bc10182004","Type":"ContainerStarted","Data":"d7229488283ac7b8dda0826471cb965309cf1c47e2f8ba0d50557f1751121c0d"} Jan 31 14:54:20 crc kubenswrapper[4751]: I0131 14:54:20.078581 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40b4j2q" event={"ID":"667a6cec-bf73-4340-9be6-f4bc10182004","Type":"ContainerStarted","Data":"48b5fe15e6b5d08f52dd98326462e391e5fafbd3bd396d34d3c7b444efffa146"} Jan 31 14:54:21 crc kubenswrapper[4751]: I0131 14:54:21.084715 4751 generic.go:334] "Generic (PLEG): container finished" podID="667a6cec-bf73-4340-9be6-f4bc10182004" containerID="48b5fe15e6b5d08f52dd98326462e391e5fafbd3bd396d34d3c7b444efffa146" exitCode=0 Jan 31 14:54:21 crc kubenswrapper[4751]: I0131 14:54:21.084765 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40b4j2q" event={"ID":"667a6cec-bf73-4340-9be6-f4bc10182004","Type":"ContainerDied","Data":"48b5fe15e6b5d08f52dd98326462e391e5fafbd3bd396d34d3c7b444efffa146"} Jan 31 14:54:22 crc kubenswrapper[4751]: I0131 14:54:22.094883 4751 generic.go:334] "Generic (PLEG): container finished" podID="667a6cec-bf73-4340-9be6-f4bc10182004" containerID="1220529350d17a7bb750446818ec08ebb9bc079afd6ac80866f7fe1abd4f1db3" exitCode=0 Jan 31 14:54:22 crc kubenswrapper[4751]: I0131 
14:54:22.094939 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40b4j2q" event={"ID":"667a6cec-bf73-4340-9be6-f4bc10182004","Type":"ContainerDied","Data":"1220529350d17a7bb750446818ec08ebb9bc079afd6ac80866f7fe1abd4f1db3"} Jan 31 14:54:23 crc kubenswrapper[4751]: I0131 14:54:23.366591 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40b4j2q" Jan 31 14:54:23 crc kubenswrapper[4751]: I0131 14:54:23.463476 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/667a6cec-bf73-4340-9be6-f4bc10182004-bundle\") pod \"667a6cec-bf73-4340-9be6-f4bc10182004\" (UID: \"667a6cec-bf73-4340-9be6-f4bc10182004\") " Jan 31 14:54:23 crc kubenswrapper[4751]: I0131 14:54:23.463602 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/667a6cec-bf73-4340-9be6-f4bc10182004-util\") pod \"667a6cec-bf73-4340-9be6-f4bc10182004\" (UID: \"667a6cec-bf73-4340-9be6-f4bc10182004\") " Jan 31 14:54:23 crc kubenswrapper[4751]: I0131 14:54:23.463672 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bwwbj\" (UniqueName: \"kubernetes.io/projected/667a6cec-bf73-4340-9be6-f4bc10182004-kube-api-access-bwwbj\") pod \"667a6cec-bf73-4340-9be6-f4bc10182004\" (UID: \"667a6cec-bf73-4340-9be6-f4bc10182004\") " Jan 31 14:54:23 crc kubenswrapper[4751]: I0131 14:54:23.464815 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/667a6cec-bf73-4340-9be6-f4bc10182004-bundle" (OuterVolumeSpecName: "bundle") pod "667a6cec-bf73-4340-9be6-f4bc10182004" (UID: "667a6cec-bf73-4340-9be6-f4bc10182004"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:54:23 crc kubenswrapper[4751]: I0131 14:54:23.469703 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/667a6cec-bf73-4340-9be6-f4bc10182004-kube-api-access-bwwbj" (OuterVolumeSpecName: "kube-api-access-bwwbj") pod "667a6cec-bf73-4340-9be6-f4bc10182004" (UID: "667a6cec-bf73-4340-9be6-f4bc10182004"). InnerVolumeSpecName "kube-api-access-bwwbj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:54:23 crc kubenswrapper[4751]: I0131 14:54:23.494301 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/667a6cec-bf73-4340-9be6-f4bc10182004-util" (OuterVolumeSpecName: "util") pod "667a6cec-bf73-4340-9be6-f4bc10182004" (UID: "667a6cec-bf73-4340-9be6-f4bc10182004"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:54:23 crc kubenswrapper[4751]: I0131 14:54:23.565625 4751 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/667a6cec-bf73-4340-9be6-f4bc10182004-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:54:23 crc kubenswrapper[4751]: I0131 14:54:23.565675 4751 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/667a6cec-bf73-4340-9be6-f4bc10182004-util\") on node \"crc\" DevicePath \"\"" Jan 31 14:54:23 crc kubenswrapper[4751]: I0131 14:54:23.565695 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bwwbj\" (UniqueName: \"kubernetes.io/projected/667a6cec-bf73-4340-9be6-f4bc10182004-kube-api-access-bwwbj\") on node \"crc\" DevicePath \"\"" Jan 31 14:54:24 crc kubenswrapper[4751]: I0131 14:54:24.112369 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40b4j2q" 
event={"ID":"667a6cec-bf73-4340-9be6-f4bc10182004","Type":"ContainerDied","Data":"d7229488283ac7b8dda0826471cb965309cf1c47e2f8ba0d50557f1751121c0d"} Jan 31 14:54:24 crc kubenswrapper[4751]: I0131 14:54:24.112442 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d7229488283ac7b8dda0826471cb965309cf1c47e2f8ba0d50557f1751121c0d" Jan 31 14:54:24 crc kubenswrapper[4751]: I0131 14:54:24.112936 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40b4j2q" Jan 31 14:54:31 crc kubenswrapper[4751]: I0131 14:54:31.433163 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-65848b4486-qb6hn"] Jan 31 14:54:31 crc kubenswrapper[4751]: E0131 14:54:31.433920 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="667a6cec-bf73-4340-9be6-f4bc10182004" containerName="pull" Jan 31 14:54:31 crc kubenswrapper[4751]: I0131 14:54:31.433936 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="667a6cec-bf73-4340-9be6-f4bc10182004" containerName="pull" Jan 31 14:54:31 crc kubenswrapper[4751]: E0131 14:54:31.433957 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="667a6cec-bf73-4340-9be6-f4bc10182004" containerName="util" Jan 31 14:54:31 crc kubenswrapper[4751]: I0131 14:54:31.433965 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="667a6cec-bf73-4340-9be6-f4bc10182004" containerName="util" Jan 31 14:54:31 crc kubenswrapper[4751]: E0131 14:54:31.433974 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="667a6cec-bf73-4340-9be6-f4bc10182004" containerName="extract" Jan 31 14:54:31 crc kubenswrapper[4751]: I0131 14:54:31.433983 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="667a6cec-bf73-4340-9be6-f4bc10182004" containerName="extract" Jan 31 14:54:31 crc kubenswrapper[4751]: I0131 14:54:31.434134 4751 
memory_manager.go:354] "RemoveStaleState removing state" podUID="667a6cec-bf73-4340-9be6-f4bc10182004" containerName="extract" Jan 31 14:54:31 crc kubenswrapper[4751]: I0131 14:54:31.434565 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-65848b4486-qb6hn" Jan 31 14:54:31 crc kubenswrapper[4751]: I0131 14:54:31.436863 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-dockercfg-vt8x4" Jan 31 14:54:31 crc kubenswrapper[4751]: I0131 14:54:31.436984 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"webhook-server-cert" Jan 31 14:54:31 crc kubenswrapper[4751]: I0131 14:54:31.437507 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"mariadb-operator-controller-manager-service-cert" Jan 31 14:54:31 crc kubenswrapper[4751]: I0131 14:54:31.446609 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-65848b4486-qb6hn"] Jan 31 14:54:31 crc kubenswrapper[4751]: I0131 14:54:31.580495 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/14df28b7-d7cb-466e-aa07-69e320d71620-apiservice-cert\") pod \"mariadb-operator-controller-manager-65848b4486-qb6hn\" (UID: \"14df28b7-d7cb-466e-aa07-69e320d71620\") " pod="openstack-operators/mariadb-operator-controller-manager-65848b4486-qb6hn" Jan 31 14:54:31 crc kubenswrapper[4751]: I0131 14:54:31.580571 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5zcjm\" (UniqueName: \"kubernetes.io/projected/14df28b7-d7cb-466e-aa07-69e320d71620-kube-api-access-5zcjm\") pod \"mariadb-operator-controller-manager-65848b4486-qb6hn\" (UID: \"14df28b7-d7cb-466e-aa07-69e320d71620\") " 
pod="openstack-operators/mariadb-operator-controller-manager-65848b4486-qb6hn" Jan 31 14:54:31 crc kubenswrapper[4751]: I0131 14:54:31.580609 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/14df28b7-d7cb-466e-aa07-69e320d71620-webhook-cert\") pod \"mariadb-operator-controller-manager-65848b4486-qb6hn\" (UID: \"14df28b7-d7cb-466e-aa07-69e320d71620\") " pod="openstack-operators/mariadb-operator-controller-manager-65848b4486-qb6hn" Jan 31 14:54:31 crc kubenswrapper[4751]: I0131 14:54:31.681980 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5zcjm\" (UniqueName: \"kubernetes.io/projected/14df28b7-d7cb-466e-aa07-69e320d71620-kube-api-access-5zcjm\") pod \"mariadb-operator-controller-manager-65848b4486-qb6hn\" (UID: \"14df28b7-d7cb-466e-aa07-69e320d71620\") " pod="openstack-operators/mariadb-operator-controller-manager-65848b4486-qb6hn" Jan 31 14:54:31 crc kubenswrapper[4751]: I0131 14:54:31.682059 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/14df28b7-d7cb-466e-aa07-69e320d71620-webhook-cert\") pod \"mariadb-operator-controller-manager-65848b4486-qb6hn\" (UID: \"14df28b7-d7cb-466e-aa07-69e320d71620\") " pod="openstack-operators/mariadb-operator-controller-manager-65848b4486-qb6hn" Jan 31 14:54:31 crc kubenswrapper[4751]: I0131 14:54:31.682272 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/14df28b7-d7cb-466e-aa07-69e320d71620-apiservice-cert\") pod \"mariadb-operator-controller-manager-65848b4486-qb6hn\" (UID: \"14df28b7-d7cb-466e-aa07-69e320d71620\") " pod="openstack-operators/mariadb-operator-controller-manager-65848b4486-qb6hn" Jan 31 14:54:31 crc kubenswrapper[4751]: I0131 14:54:31.692117 4751 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/14df28b7-d7cb-466e-aa07-69e320d71620-webhook-cert\") pod \"mariadb-operator-controller-manager-65848b4486-qb6hn\" (UID: \"14df28b7-d7cb-466e-aa07-69e320d71620\") " pod="openstack-operators/mariadb-operator-controller-manager-65848b4486-qb6hn" Jan 31 14:54:31 crc kubenswrapper[4751]: I0131 14:54:31.693360 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/14df28b7-d7cb-466e-aa07-69e320d71620-apiservice-cert\") pod \"mariadb-operator-controller-manager-65848b4486-qb6hn\" (UID: \"14df28b7-d7cb-466e-aa07-69e320d71620\") " pod="openstack-operators/mariadb-operator-controller-manager-65848b4486-qb6hn" Jan 31 14:54:31 crc kubenswrapper[4751]: I0131 14:54:31.727231 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5zcjm\" (UniqueName: \"kubernetes.io/projected/14df28b7-d7cb-466e-aa07-69e320d71620-kube-api-access-5zcjm\") pod \"mariadb-operator-controller-manager-65848b4486-qb6hn\" (UID: \"14df28b7-d7cb-466e-aa07-69e320d71620\") " pod="openstack-operators/mariadb-operator-controller-manager-65848b4486-qb6hn" Jan 31 14:54:31 crc kubenswrapper[4751]: I0131 14:54:31.752548 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-65848b4486-qb6hn" Jan 31 14:54:32 crc kubenswrapper[4751]: I0131 14:54:32.066498 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-65848b4486-qb6hn"] Jan 31 14:54:32 crc kubenswrapper[4751]: I0131 14:54:32.168428 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-65848b4486-qb6hn" event={"ID":"14df28b7-d7cb-466e-aa07-69e320d71620","Type":"ContainerStarted","Data":"8d7ddc4e6b1f882339c27c9bee06d6abc3c29498935b356f92bf581f66149e68"} Jan 31 14:54:36 crc kubenswrapper[4751]: I0131 14:54:36.205253 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-65848b4486-qb6hn" event={"ID":"14df28b7-d7cb-466e-aa07-69e320d71620","Type":"ContainerStarted","Data":"335b59bfcc7957562303125aed6f69b84b76a35a78ef760e49817204c373ec41"} Jan 31 14:54:36 crc kubenswrapper[4751]: I0131 14:54:36.205870 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/mariadb-operator-controller-manager-65848b4486-qb6hn" Jan 31 14:54:36 crc kubenswrapper[4751]: I0131 14:54:36.235749 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/mariadb-operator-controller-manager-65848b4486-qb6hn" podStartSLOduration=1.911500285 podStartE2EDuration="5.235715992s" podCreationTimestamp="2026-01-31 14:54:31 +0000 UTC" firstStartedPulling="2026-01-31 14:54:32.07448376 +0000 UTC m=+776.449196645" lastFinishedPulling="2026-01-31 14:54:35.398699467 +0000 UTC m=+779.773412352" observedRunningTime="2026-01-31 14:54:36.230192687 +0000 UTC m=+780.604905612" watchObservedRunningTime="2026-01-31 14:54:36.235715992 +0000 UTC m=+780.610428917" Jan 31 14:54:38 crc kubenswrapper[4751]: I0131 14:54:38.897051 4751 patch_prober.go:28] interesting pod/machine-config-daemon-2wpj7 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 14:54:38 crc kubenswrapper[4751]: I0131 14:54:38.897609 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 14:54:41 crc kubenswrapper[4751]: I0131 14:54:41.758146 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/mariadb-operator-controller-manager-65848b4486-qb6hn" Jan 31 14:54:46 crc kubenswrapper[4751]: I0131 14:54:46.441876 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-index-9b26d"] Jan 31 14:54:46 crc kubenswrapper[4751]: I0131 14:54:46.453223 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-9b26d" Jan 31 14:54:46 crc kubenswrapper[4751]: I0131 14:54:46.484654 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-26444\" (UniqueName: \"kubernetes.io/projected/f24c1af7-b130-4ede-a7be-24aedb5c293b-kube-api-access-26444\") pod \"infra-operator-index-9b26d\" (UID: \"f24c1af7-b130-4ede-a7be-24aedb5c293b\") " pod="openstack-operators/infra-operator-index-9b26d" Jan 31 14:54:46 crc kubenswrapper[4751]: I0131 14:54:46.524443 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-index-dockercfg-2wdmz" Jan 31 14:54:46 crc kubenswrapper[4751]: I0131 14:54:46.528434 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-9b26d"] Jan 31 14:54:46 crc kubenswrapper[4751]: I0131 14:54:46.585517 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-26444\" (UniqueName: \"kubernetes.io/projected/f24c1af7-b130-4ede-a7be-24aedb5c293b-kube-api-access-26444\") pod \"infra-operator-index-9b26d\" (UID: \"f24c1af7-b130-4ede-a7be-24aedb5c293b\") " pod="openstack-operators/infra-operator-index-9b26d" Jan 31 14:54:46 crc kubenswrapper[4751]: I0131 14:54:46.769045 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-26444\" (UniqueName: \"kubernetes.io/projected/f24c1af7-b130-4ede-a7be-24aedb5c293b-kube-api-access-26444\") pod \"infra-operator-index-9b26d\" (UID: \"f24c1af7-b130-4ede-a7be-24aedb5c293b\") " pod="openstack-operators/infra-operator-index-9b26d" Jan 31 14:54:46 crc kubenswrapper[4751]: I0131 14:54:46.781499 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-9b26d" Jan 31 14:54:47 crc kubenswrapper[4751]: I0131 14:54:47.265395 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-9b26d"] Jan 31 14:54:48 crc kubenswrapper[4751]: I0131 14:54:48.279911 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-9b26d" event={"ID":"f24c1af7-b130-4ede-a7be-24aedb5c293b","Type":"ContainerStarted","Data":"8a29ac7a55371e1e5a88fa32404a31e8d6cddca143fb918552f1ab9d6575739d"} Jan 31 14:54:49 crc kubenswrapper[4751]: I0131 14:54:49.286979 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-9b26d" event={"ID":"f24c1af7-b130-4ede-a7be-24aedb5c293b","Type":"ContainerStarted","Data":"0932a4c503ee3b0e04317220a8d546932e9688b754dc52c70da600000553bab7"} Jan 31 14:54:49 crc kubenswrapper[4751]: I0131 14:54:49.300129 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-index-9b26d" podStartSLOduration=1.9576949209999999 podStartE2EDuration="3.30004309s" podCreationTimestamp="2026-01-31 14:54:46 +0000 UTC" firstStartedPulling="2026-01-31 14:54:47.283165727 +0000 UTC m=+791.657878612" lastFinishedPulling="2026-01-31 14:54:48.625513896 +0000 UTC m=+793.000226781" observedRunningTime="2026-01-31 14:54:49.298650443 +0000 UTC m=+793.673363328" watchObservedRunningTime="2026-01-31 14:54:49.30004309 +0000 UTC m=+793.674755985" Jan 31 14:54:49 crc kubenswrapper[4751]: I0131 14:54:49.395639 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-9b26d"] Jan 31 14:54:50 crc kubenswrapper[4751]: I0131 14:54:50.017497 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-index-5tz82"] Jan 31 14:54:50 crc kubenswrapper[4751]: I0131 14:54:50.018942 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-5tz82" Jan 31 14:54:50 crc kubenswrapper[4751]: I0131 14:54:50.025356 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-5tz82"] Jan 31 14:54:50 crc kubenswrapper[4751]: I0131 14:54:50.165225 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rmd6m\" (UniqueName: \"kubernetes.io/projected/15539f33-874c-45ae-8ee2-7f821c54b267-kube-api-access-rmd6m\") pod \"infra-operator-index-5tz82\" (UID: \"15539f33-874c-45ae-8ee2-7f821c54b267\") " pod="openstack-operators/infra-operator-index-5tz82" Jan 31 14:54:50 crc kubenswrapper[4751]: I0131 14:54:50.266845 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rmd6m\" (UniqueName: \"kubernetes.io/projected/15539f33-874c-45ae-8ee2-7f821c54b267-kube-api-access-rmd6m\") pod \"infra-operator-index-5tz82\" (UID: \"15539f33-874c-45ae-8ee2-7f821c54b267\") " pod="openstack-operators/infra-operator-index-5tz82" Jan 31 14:54:50 crc kubenswrapper[4751]: I0131 14:54:50.303113 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rmd6m\" (UniqueName: \"kubernetes.io/projected/15539f33-874c-45ae-8ee2-7f821c54b267-kube-api-access-rmd6m\") pod \"infra-operator-index-5tz82\" (UID: \"15539f33-874c-45ae-8ee2-7f821c54b267\") " pod="openstack-operators/infra-operator-index-5tz82" Jan 31 14:54:50 crc kubenswrapper[4751]: I0131 14:54:50.391552 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-5tz82" Jan 31 14:54:50 crc kubenswrapper[4751]: I0131 14:54:50.828582 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-index-5tz82"] Jan 31 14:54:50 crc kubenswrapper[4751]: W0131 14:54:50.839337 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod15539f33_874c_45ae_8ee2_7f821c54b267.slice/crio-a32321b4d51d551ed7ea834004f3d66d0ba16e8c6d1b16cfe9fefade795fabc7 WatchSource:0}: Error finding container a32321b4d51d551ed7ea834004f3d66d0ba16e8c6d1b16cfe9fefade795fabc7: Status 404 returned error can't find the container with id a32321b4d51d551ed7ea834004f3d66d0ba16e8c6d1b16cfe9fefade795fabc7 Jan 31 14:54:51 crc kubenswrapper[4751]: I0131 14:54:51.308398 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-5tz82" event={"ID":"15539f33-874c-45ae-8ee2-7f821c54b267","Type":"ContainerStarted","Data":"a32321b4d51d551ed7ea834004f3d66d0ba16e8c6d1b16cfe9fefade795fabc7"} Jan 31 14:54:51 crc kubenswrapper[4751]: I0131 14:54:51.308966 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/infra-operator-index-9b26d" podUID="f24c1af7-b130-4ede-a7be-24aedb5c293b" containerName="registry-server" containerID="cri-o://0932a4c503ee3b0e04317220a8d546932e9688b754dc52c70da600000553bab7" gracePeriod=2 Jan 31 14:54:51 crc kubenswrapper[4751]: I0131 14:54:51.700518 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-9b26d" Jan 31 14:54:51 crc kubenswrapper[4751]: I0131 14:54:51.786543 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26444\" (UniqueName: \"kubernetes.io/projected/f24c1af7-b130-4ede-a7be-24aedb5c293b-kube-api-access-26444\") pod \"f24c1af7-b130-4ede-a7be-24aedb5c293b\" (UID: \"f24c1af7-b130-4ede-a7be-24aedb5c293b\") " Jan 31 14:54:51 crc kubenswrapper[4751]: I0131 14:54:51.793917 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f24c1af7-b130-4ede-a7be-24aedb5c293b-kube-api-access-26444" (OuterVolumeSpecName: "kube-api-access-26444") pod "f24c1af7-b130-4ede-a7be-24aedb5c293b" (UID: "f24c1af7-b130-4ede-a7be-24aedb5c293b"). InnerVolumeSpecName "kube-api-access-26444". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:54:51 crc kubenswrapper[4751]: I0131 14:54:51.888550 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-26444\" (UniqueName: \"kubernetes.io/projected/f24c1af7-b130-4ede-a7be-24aedb5c293b-kube-api-access-26444\") on node \"crc\" DevicePath \"\"" Jan 31 14:54:52 crc kubenswrapper[4751]: I0131 14:54:52.316244 4751 generic.go:334] "Generic (PLEG): container finished" podID="f24c1af7-b130-4ede-a7be-24aedb5c293b" containerID="0932a4c503ee3b0e04317220a8d546932e9688b754dc52c70da600000553bab7" exitCode=0 Jan 31 14:54:52 crc kubenswrapper[4751]: I0131 14:54:52.316314 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-9b26d" event={"ID":"f24c1af7-b130-4ede-a7be-24aedb5c293b","Type":"ContainerDied","Data":"0932a4c503ee3b0e04317220a8d546932e9688b754dc52c70da600000553bab7"} Jan 31 14:54:52 crc kubenswrapper[4751]: I0131 14:54:52.316341 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-9b26d" 
event={"ID":"f24c1af7-b130-4ede-a7be-24aedb5c293b","Type":"ContainerDied","Data":"8a29ac7a55371e1e5a88fa32404a31e8d6cddca143fb918552f1ab9d6575739d"} Jan 31 14:54:52 crc kubenswrapper[4751]: I0131 14:54:52.316334 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-9b26d" Jan 31 14:54:52 crc kubenswrapper[4751]: I0131 14:54:52.316391 4751 scope.go:117] "RemoveContainer" containerID="0932a4c503ee3b0e04317220a8d546932e9688b754dc52c70da600000553bab7" Jan 31 14:54:52 crc kubenswrapper[4751]: I0131 14:54:52.318351 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-5tz82" event={"ID":"15539f33-874c-45ae-8ee2-7f821c54b267","Type":"ContainerStarted","Data":"2880fdfb3f748e15e74bf089fd52f14043b007613c5d6a9fb47038e2c465f42a"} Jan 31 14:54:52 crc kubenswrapper[4751]: I0131 14:54:52.342240 4751 scope.go:117] "RemoveContainer" containerID="0932a4c503ee3b0e04317220a8d546932e9688b754dc52c70da600000553bab7" Jan 31 14:54:52 crc kubenswrapper[4751]: E0131 14:54:52.342830 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0932a4c503ee3b0e04317220a8d546932e9688b754dc52c70da600000553bab7\": container with ID starting with 0932a4c503ee3b0e04317220a8d546932e9688b754dc52c70da600000553bab7 not found: ID does not exist" containerID="0932a4c503ee3b0e04317220a8d546932e9688b754dc52c70da600000553bab7" Jan 31 14:54:52 crc kubenswrapper[4751]: I0131 14:54:52.342869 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0932a4c503ee3b0e04317220a8d546932e9688b754dc52c70da600000553bab7"} err="failed to get container status \"0932a4c503ee3b0e04317220a8d546932e9688b754dc52c70da600000553bab7\": rpc error: code = NotFound desc = could not find container \"0932a4c503ee3b0e04317220a8d546932e9688b754dc52c70da600000553bab7\": container with ID starting with 
0932a4c503ee3b0e04317220a8d546932e9688b754dc52c70da600000553bab7 not found: ID does not exist" Jan 31 14:54:52 crc kubenswrapper[4751]: I0131 14:54:52.356493 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-index-5tz82" podStartSLOduration=2.92474711 podStartE2EDuration="3.356469785s" podCreationTimestamp="2026-01-31 14:54:49 +0000 UTC" firstStartedPulling="2026-01-31 14:54:50.843994633 +0000 UTC m=+795.218707548" lastFinishedPulling="2026-01-31 14:54:51.275717328 +0000 UTC m=+795.650430223" observedRunningTime="2026-01-31 14:54:52.342293721 +0000 UTC m=+796.717006626" watchObservedRunningTime="2026-01-31 14:54:52.356469785 +0000 UTC m=+796.731182680" Jan 31 14:54:52 crc kubenswrapper[4751]: I0131 14:54:52.360319 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-9b26d"] Jan 31 14:54:52 crc kubenswrapper[4751]: I0131 14:54:52.365230 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-index-9b26d"] Jan 31 14:54:52 crc kubenswrapper[4751]: I0131 14:54:52.413326 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f24c1af7-b130-4ede-a7be-24aedb5c293b" path="/var/lib/kubelet/pods/f24c1af7-b130-4ede-a7be-24aedb5c293b/volumes" Jan 31 14:55:00 crc kubenswrapper[4751]: I0131 14:55:00.392225 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-index-5tz82" Jan 31 14:55:00 crc kubenswrapper[4751]: I0131 14:55:00.392994 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/infra-operator-index-5tz82" Jan 31 14:55:00 crc kubenswrapper[4751]: I0131 14:55:00.437545 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/infra-operator-index-5tz82" Jan 31 14:55:01 crc kubenswrapper[4751]: I0131 14:55:01.401790 4751 kubelet.go:2542] "SyncLoop (probe)" 
probe="readiness" status="ready" pod="openstack-operators/infra-operator-index-5tz82" Jan 31 14:55:02 crc kubenswrapper[4751]: I0131 14:55:02.647456 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576w2vw6"] Jan 31 14:55:02 crc kubenswrapper[4751]: E0131 14:55:02.648294 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f24c1af7-b130-4ede-a7be-24aedb5c293b" containerName="registry-server" Jan 31 14:55:02 crc kubenswrapper[4751]: I0131 14:55:02.648326 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f24c1af7-b130-4ede-a7be-24aedb5c293b" containerName="registry-server" Jan 31 14:55:02 crc kubenswrapper[4751]: I0131 14:55:02.648612 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f24c1af7-b130-4ede-a7be-24aedb5c293b" containerName="registry-server" Jan 31 14:55:02 crc kubenswrapper[4751]: I0131 14:55:02.650432 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576w2vw6" Jan 31 14:55:02 crc kubenswrapper[4751]: I0131 14:55:02.656680 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576w2vw6"] Jan 31 14:55:02 crc kubenswrapper[4751]: I0131 14:55:02.657939 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-wxkjx" Jan 31 14:55:02 crc kubenswrapper[4751]: I0131 14:55:02.755763 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8l97j\" (UniqueName: \"kubernetes.io/projected/29a3b16f-f39d-413a-b623-3ac15aba50cf-kube-api-access-8l97j\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576w2vw6\" (UID: \"29a3b16f-f39d-413a-b623-3ac15aba50cf\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576w2vw6" Jan 31 14:55:02 crc kubenswrapper[4751]: I0131 14:55:02.755841 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/29a3b16f-f39d-413a-b623-3ac15aba50cf-util\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576w2vw6\" (UID: \"29a3b16f-f39d-413a-b623-3ac15aba50cf\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576w2vw6" Jan 31 14:55:02 crc kubenswrapper[4751]: I0131 14:55:02.755869 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/29a3b16f-f39d-413a-b623-3ac15aba50cf-bundle\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576w2vw6\" (UID: \"29a3b16f-f39d-413a-b623-3ac15aba50cf\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576w2vw6" Jan 31 14:55:02 crc kubenswrapper[4751]: I0131 
14:55:02.857184 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/29a3b16f-f39d-413a-b623-3ac15aba50cf-util\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576w2vw6\" (UID: \"29a3b16f-f39d-413a-b623-3ac15aba50cf\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576w2vw6" Jan 31 14:55:02 crc kubenswrapper[4751]: I0131 14:55:02.857223 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/29a3b16f-f39d-413a-b623-3ac15aba50cf-bundle\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576w2vw6\" (UID: \"29a3b16f-f39d-413a-b623-3ac15aba50cf\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576w2vw6" Jan 31 14:55:02 crc kubenswrapper[4751]: I0131 14:55:02.857278 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8l97j\" (UniqueName: \"kubernetes.io/projected/29a3b16f-f39d-413a-b623-3ac15aba50cf-kube-api-access-8l97j\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576w2vw6\" (UID: \"29a3b16f-f39d-413a-b623-3ac15aba50cf\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576w2vw6" Jan 31 14:55:02 crc kubenswrapper[4751]: I0131 14:55:02.857818 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/29a3b16f-f39d-413a-b623-3ac15aba50cf-util\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576w2vw6\" (UID: \"29a3b16f-f39d-413a-b623-3ac15aba50cf\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576w2vw6" Jan 31 14:55:02 crc kubenswrapper[4751]: I0131 14:55:02.857886 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/29a3b16f-f39d-413a-b623-3ac15aba50cf-bundle\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576w2vw6\" (UID: \"29a3b16f-f39d-413a-b623-3ac15aba50cf\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576w2vw6" Jan 31 14:55:02 crc kubenswrapper[4751]: I0131 14:55:02.876692 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8l97j\" (UniqueName: \"kubernetes.io/projected/29a3b16f-f39d-413a-b623-3ac15aba50cf-kube-api-access-8l97j\") pod \"d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576w2vw6\" (UID: \"29a3b16f-f39d-413a-b623-3ac15aba50cf\") " pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576w2vw6" Jan 31 14:55:02 crc kubenswrapper[4751]: I0131 14:55:02.967337 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576w2vw6" Jan 31 14:55:03 crc kubenswrapper[4751]: I0131 14:55:03.295637 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576w2vw6"] Jan 31 14:55:03 crc kubenswrapper[4751]: I0131 14:55:03.392161 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576w2vw6" event={"ID":"29a3b16f-f39d-413a-b623-3ac15aba50cf","Type":"ContainerStarted","Data":"02ab8ec2e4c1f038366997dc0d3a6ec842779cf69755b318b2167207ddaf7560"} Jan 31 14:55:04 crc kubenswrapper[4751]: I0131 14:55:04.402498 4751 generic.go:334] "Generic (PLEG): container finished" podID="29a3b16f-f39d-413a-b623-3ac15aba50cf" containerID="914fa7bc157f85f90159778e4a352984883804f817b8f2353eb69568b5c31c21" exitCode=0 Jan 31 14:55:04 crc kubenswrapper[4751]: I0131 14:55:04.402569 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576w2vw6" event={"ID":"29a3b16f-f39d-413a-b623-3ac15aba50cf","Type":"ContainerDied","Data":"914fa7bc157f85f90159778e4a352984883804f817b8f2353eb69568b5c31c21"} Jan 31 14:55:05 crc kubenswrapper[4751]: I0131 14:55:05.409710 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576w2vw6" event={"ID":"29a3b16f-f39d-413a-b623-3ac15aba50cf","Type":"ContainerStarted","Data":"c816a8193fdacfa313315863400dd00d03b42ba5e5ce2524c35985ffd3fa845b"} Jan 31 14:55:06 crc kubenswrapper[4751]: I0131 14:55:06.420949 4751 generic.go:334] "Generic (PLEG): container finished" podID="29a3b16f-f39d-413a-b623-3ac15aba50cf" containerID="c816a8193fdacfa313315863400dd00d03b42ba5e5ce2524c35985ffd3fa845b" exitCode=0 Jan 31 14:55:06 crc kubenswrapper[4751]: I0131 14:55:06.421521 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576w2vw6" event={"ID":"29a3b16f-f39d-413a-b623-3ac15aba50cf","Type":"ContainerDied","Data":"c816a8193fdacfa313315863400dd00d03b42ba5e5ce2524c35985ffd3fa845b"} Jan 31 14:55:07 crc kubenswrapper[4751]: I0131 14:55:07.429369 4751 generic.go:334] "Generic (PLEG): container finished" podID="29a3b16f-f39d-413a-b623-3ac15aba50cf" containerID="ed9ea3bb8f54f1c0a1685efd692fcb4334fbd2ea55432c305b974a3bf1ca584b" exitCode=0 Jan 31 14:55:07 crc kubenswrapper[4751]: I0131 14:55:07.429442 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576w2vw6" event={"ID":"29a3b16f-f39d-413a-b623-3ac15aba50cf","Type":"ContainerDied","Data":"ed9ea3bb8f54f1c0a1685efd692fcb4334fbd2ea55432c305b974a3bf1ca584b"} Jan 31 14:55:08 crc kubenswrapper[4751]: I0131 14:55:08.842487 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576w2vw6" Jan 31 14:55:08 crc kubenswrapper[4751]: I0131 14:55:08.896677 4751 patch_prober.go:28] interesting pod/machine-config-daemon-2wpj7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 14:55:08 crc kubenswrapper[4751]: I0131 14:55:08.896765 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 14:55:08 crc kubenswrapper[4751]: I0131 14:55:08.949731 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8l97j\" (UniqueName: \"kubernetes.io/projected/29a3b16f-f39d-413a-b623-3ac15aba50cf-kube-api-access-8l97j\") pod \"29a3b16f-f39d-413a-b623-3ac15aba50cf\" (UID: \"29a3b16f-f39d-413a-b623-3ac15aba50cf\") " Jan 31 14:55:08 crc kubenswrapper[4751]: I0131 14:55:08.949792 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/29a3b16f-f39d-413a-b623-3ac15aba50cf-bundle\") pod \"29a3b16f-f39d-413a-b623-3ac15aba50cf\" (UID: \"29a3b16f-f39d-413a-b623-3ac15aba50cf\") " Jan 31 14:55:08 crc kubenswrapper[4751]: I0131 14:55:08.949825 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/29a3b16f-f39d-413a-b623-3ac15aba50cf-util\") pod \"29a3b16f-f39d-413a-b623-3ac15aba50cf\" (UID: \"29a3b16f-f39d-413a-b623-3ac15aba50cf\") " Jan 31 14:55:08 crc kubenswrapper[4751]: I0131 14:55:08.954763 4751 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29a3b16f-f39d-413a-b623-3ac15aba50cf-bundle" (OuterVolumeSpecName: "bundle") pod "29a3b16f-f39d-413a-b623-3ac15aba50cf" (UID: "29a3b16f-f39d-413a-b623-3ac15aba50cf"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:55:08 crc kubenswrapper[4751]: I0131 14:55:08.957157 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/29a3b16f-f39d-413a-b623-3ac15aba50cf-kube-api-access-8l97j" (OuterVolumeSpecName: "kube-api-access-8l97j") pod "29a3b16f-f39d-413a-b623-3ac15aba50cf" (UID: "29a3b16f-f39d-413a-b623-3ac15aba50cf"). InnerVolumeSpecName "kube-api-access-8l97j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:55:09 crc kubenswrapper[4751]: I0131 14:55:09.026671 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/29a3b16f-f39d-413a-b623-3ac15aba50cf-util" (OuterVolumeSpecName: "util") pod "29a3b16f-f39d-413a-b623-3ac15aba50cf" (UID: "29a3b16f-f39d-413a-b623-3ac15aba50cf"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:55:09 crc kubenswrapper[4751]: I0131 14:55:09.051844 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8l97j\" (UniqueName: \"kubernetes.io/projected/29a3b16f-f39d-413a-b623-3ac15aba50cf-kube-api-access-8l97j\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:09 crc kubenswrapper[4751]: I0131 14:55:09.051898 4751 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/29a3b16f-f39d-413a-b623-3ac15aba50cf-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:09 crc kubenswrapper[4751]: I0131 14:55:09.051918 4751 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/29a3b16f-f39d-413a-b623-3ac15aba50cf-util\") on node \"crc\" DevicePath \"\"" Jan 31 14:55:09 crc kubenswrapper[4751]: I0131 14:55:09.443154 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576w2vw6" event={"ID":"29a3b16f-f39d-413a-b623-3ac15aba50cf","Type":"ContainerDied","Data":"02ab8ec2e4c1f038366997dc0d3a6ec842779cf69755b318b2167207ddaf7560"} Jan 31 14:55:09 crc kubenswrapper[4751]: I0131 14:55:09.443221 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="02ab8ec2e4c1f038366997dc0d3a6ec842779cf69755b318b2167207ddaf7560" Jan 31 14:55:09 crc kubenswrapper[4751]: I0131 14:55:09.443325 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576w2vw6" Jan 31 14:55:19 crc kubenswrapper[4751]: I0131 14:55:19.530404 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/infra-operator-controller-manager-57f67fdff5-45pkl"] Jan 31 14:55:19 crc kubenswrapper[4751]: E0131 14:55:19.531116 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a3b16f-f39d-413a-b623-3ac15aba50cf" containerName="extract" Jan 31 14:55:19 crc kubenswrapper[4751]: I0131 14:55:19.531132 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a3b16f-f39d-413a-b623-3ac15aba50cf" containerName="extract" Jan 31 14:55:19 crc kubenswrapper[4751]: E0131 14:55:19.531150 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a3b16f-f39d-413a-b623-3ac15aba50cf" containerName="pull" Jan 31 14:55:19 crc kubenswrapper[4751]: I0131 14:55:19.531158 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a3b16f-f39d-413a-b623-3ac15aba50cf" containerName="pull" Jan 31 14:55:19 crc kubenswrapper[4751]: E0131 14:55:19.531171 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="29a3b16f-f39d-413a-b623-3ac15aba50cf" containerName="util" Jan 31 14:55:19 crc kubenswrapper[4751]: I0131 14:55:19.531180 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="29a3b16f-f39d-413a-b623-3ac15aba50cf" containerName="util" Jan 31 14:55:19 crc kubenswrapper[4751]: I0131 14:55:19.531310 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="29a3b16f-f39d-413a-b623-3ac15aba50cf" containerName="extract" Jan 31 14:55:19 crc kubenswrapper[4751]: I0131 14:55:19.531793 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57f67fdff5-45pkl" Jan 31 14:55:19 crc kubenswrapper[4751]: I0131 14:55:19.535251 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-service-cert" Jan 31 14:55:19 crc kubenswrapper[4751]: I0131 14:55:19.535403 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"infra-operator-controller-manager-dockercfg-t98gl" Jan 31 14:55:19 crc kubenswrapper[4751]: I0131 14:55:19.555461 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57f67fdff5-45pkl"] Jan 31 14:55:19 crc kubenswrapper[4751]: I0131 14:55:19.594846 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hq5r\" (UniqueName: \"kubernetes.io/projected/6578d137-d120-43b2-99e3-71d4f6525d6c-kube-api-access-7hq5r\") pod \"infra-operator-controller-manager-57f67fdff5-45pkl\" (UID: \"6578d137-d120-43b2-99e3-71d4f6525d6c\") " pod="openstack-operators/infra-operator-controller-manager-57f67fdff5-45pkl" Jan 31 14:55:19 crc kubenswrapper[4751]: I0131 14:55:19.594928 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6578d137-d120-43b2-99e3-71d4f6525d6c-webhook-cert\") pod \"infra-operator-controller-manager-57f67fdff5-45pkl\" (UID: \"6578d137-d120-43b2-99e3-71d4f6525d6c\") " pod="openstack-operators/infra-operator-controller-manager-57f67fdff5-45pkl" Jan 31 14:55:19 crc kubenswrapper[4751]: I0131 14:55:19.594965 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6578d137-d120-43b2-99e3-71d4f6525d6c-apiservice-cert\") pod \"infra-operator-controller-manager-57f67fdff5-45pkl\" (UID: 
\"6578d137-d120-43b2-99e3-71d4f6525d6c\") " pod="openstack-operators/infra-operator-controller-manager-57f67fdff5-45pkl" Jan 31 14:55:19 crc kubenswrapper[4751]: I0131 14:55:19.696008 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hq5r\" (UniqueName: \"kubernetes.io/projected/6578d137-d120-43b2-99e3-71d4f6525d6c-kube-api-access-7hq5r\") pod \"infra-operator-controller-manager-57f67fdff5-45pkl\" (UID: \"6578d137-d120-43b2-99e3-71d4f6525d6c\") " pod="openstack-operators/infra-operator-controller-manager-57f67fdff5-45pkl" Jan 31 14:55:19 crc kubenswrapper[4751]: I0131 14:55:19.696102 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6578d137-d120-43b2-99e3-71d4f6525d6c-webhook-cert\") pod \"infra-operator-controller-manager-57f67fdff5-45pkl\" (UID: \"6578d137-d120-43b2-99e3-71d4f6525d6c\") " pod="openstack-operators/infra-operator-controller-manager-57f67fdff5-45pkl" Jan 31 14:55:19 crc kubenswrapper[4751]: I0131 14:55:19.696137 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6578d137-d120-43b2-99e3-71d4f6525d6c-apiservice-cert\") pod \"infra-operator-controller-manager-57f67fdff5-45pkl\" (UID: \"6578d137-d120-43b2-99e3-71d4f6525d6c\") " pod="openstack-operators/infra-operator-controller-manager-57f67fdff5-45pkl" Jan 31 14:55:19 crc kubenswrapper[4751]: I0131 14:55:19.702401 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6578d137-d120-43b2-99e3-71d4f6525d6c-webhook-cert\") pod \"infra-operator-controller-manager-57f67fdff5-45pkl\" (UID: \"6578d137-d120-43b2-99e3-71d4f6525d6c\") " pod="openstack-operators/infra-operator-controller-manager-57f67fdff5-45pkl" Jan 31 14:55:19 crc kubenswrapper[4751]: I0131 14:55:19.706659 4751 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6578d137-d120-43b2-99e3-71d4f6525d6c-apiservice-cert\") pod \"infra-operator-controller-manager-57f67fdff5-45pkl\" (UID: \"6578d137-d120-43b2-99e3-71d4f6525d6c\") " pod="openstack-operators/infra-operator-controller-manager-57f67fdff5-45pkl" Jan 31 14:55:19 crc kubenswrapper[4751]: I0131 14:55:19.711597 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hq5r\" (UniqueName: \"kubernetes.io/projected/6578d137-d120-43b2-99e3-71d4f6525d6c-kube-api-access-7hq5r\") pod \"infra-operator-controller-manager-57f67fdff5-45pkl\" (UID: \"6578d137-d120-43b2-99e3-71d4f6525d6c\") " pod="openstack-operators/infra-operator-controller-manager-57f67fdff5-45pkl" Jan 31 14:55:19 crc kubenswrapper[4751]: I0131 14:55:19.858148 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57f67fdff5-45pkl" Jan 31 14:55:20 crc kubenswrapper[4751]: I0131 14:55:20.249247 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57f67fdff5-45pkl"] Jan 31 14:55:20 crc kubenswrapper[4751]: W0131 14:55:20.260547 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6578d137_d120_43b2_99e3_71d4f6525d6c.slice/crio-fd210a97bb4f47dccbcdbfba3a6c2101ade7c45f9468d34991d8307e718c3b16 WatchSource:0}: Error finding container fd210a97bb4f47dccbcdbfba3a6c2101ade7c45f9468d34991d8307e718c3b16: Status 404 returned error can't find the container with id fd210a97bb4f47dccbcdbfba3a6c2101ade7c45f9468d34991d8307e718c3b16 Jan 31 14:55:20 crc kubenswrapper[4751]: I0131 14:55:20.517739 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57f67fdff5-45pkl" 
event={"ID":"6578d137-d120-43b2-99e3-71d4f6525d6c","Type":"ContainerStarted","Data":"fd210a97bb4f47dccbcdbfba3a6c2101ade7c45f9468d34991d8307e718c3b16"} Jan 31 14:55:22 crc kubenswrapper[4751]: I0131 14:55:22.531493 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57f67fdff5-45pkl" event={"ID":"6578d137-d120-43b2-99e3-71d4f6525d6c","Type":"ContainerStarted","Data":"5dbebc52897c07a2e7dfa38ad3cf5873d3f7ef9969f655b93dd162236a7cbaa8"} Jan 31 14:55:22 crc kubenswrapper[4751]: I0131 14:55:22.532315 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/infra-operator-controller-manager-57f67fdff5-45pkl" Jan 31 14:55:22 crc kubenswrapper[4751]: I0131 14:55:22.554099 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/infra-operator-controller-manager-57f67fdff5-45pkl" podStartSLOduration=1.70522749 podStartE2EDuration="3.554082306s" podCreationTimestamp="2026-01-31 14:55:19 +0000 UTC" firstStartedPulling="2026-01-31 14:55:20.262831913 +0000 UTC m=+824.637544798" lastFinishedPulling="2026-01-31 14:55:22.111686729 +0000 UTC m=+826.486399614" observedRunningTime="2026-01-31 14:55:22.549117295 +0000 UTC m=+826.923830220" watchObservedRunningTime="2026-01-31 14:55:22.554082306 +0000 UTC m=+826.928795191" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.242888 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/openstack-galera-0"] Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.244997 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/openstack-galera-0" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.247902 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"galera-openstack-dockercfg-bgvdx" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.248886 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openstack-config-data" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.248946 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"kube-root-ca.crt" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.248884 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openstack-scripts" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.249496 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openshift-service-ca.crt" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.251654 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/openstack-galera-1"] Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.252710 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/openstack-galera-1" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.262339 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstack-galera-0"] Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.317249 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/07a2906d-db30-4578-8b1e-088ca2f20ced-config-data-generated\") pod \"openstack-galera-0\" (UID: \"07a2906d-db30-4578-8b1e-088ca2f20ced\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.317331 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ng8cd\" (UniqueName: \"kubernetes.io/projected/07a2906d-db30-4578-8b1e-088ca2f20ced-kube-api-access-ng8cd\") pod \"openstack-galera-0\" (UID: \"07a2906d-db30-4578-8b1e-088ca2f20ced\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.317408 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"07a2906d-db30-4578-8b1e-088ca2f20ced\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.317489 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07a2906d-db30-4578-8b1e-088ca2f20ced-operator-scripts\") pod \"openstack-galera-0\" (UID: \"07a2906d-db30-4578-8b1e-088ca2f20ced\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.317609 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/07a2906d-db30-4578-8b1e-088ca2f20ced-kolla-config\") pod \"openstack-galera-0\" (UID: \"07a2906d-db30-4578-8b1e-088ca2f20ced\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.317663 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/07a2906d-db30-4578-8b1e-088ca2f20ced-config-data-default\") pod \"openstack-galera-0\" (UID: \"07a2906d-db30-4578-8b1e-088ca2f20ced\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.319813 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/openstack-galera-2"] Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.325517 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstack-galera-2" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.357472 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstack-galera-1"] Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.366390 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstack-galera-2"] Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.418384 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/07a2906d-db30-4578-8b1e-088ca2f20ced-kolla-config\") pod \"openstack-galera-0\" (UID: \"07a2906d-db30-4578-8b1e-088ca2f20ced\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.418430 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/07a2906d-db30-4578-8b1e-088ca2f20ced-config-data-default\") pod \"openstack-galera-0\" (UID: 
\"07a2906d-db30-4578-8b1e-088ca2f20ced\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.418466 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-1\" (UID: \"22459bcc-672e-4390-89ae-2b5fa48ded71\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.418483 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/22459bcc-672e-4390-89ae-2b5fa48ded71-kolla-config\") pod \"openstack-galera-1\" (UID: \"22459bcc-672e-4390-89ae-2b5fa48ded71\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.418498 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/22459bcc-672e-4390-89ae-2b5fa48ded71-config-data-default\") pod \"openstack-galera-1\" (UID: \"22459bcc-672e-4390-89ae-2b5fa48ded71\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.418515 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22459bcc-672e-4390-89ae-2b5fa48ded71-operator-scripts\") pod \"openstack-galera-1\" (UID: \"22459bcc-672e-4390-89ae-2b5fa48ded71\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.418541 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/07a2906d-db30-4578-8b1e-088ca2f20ced-config-data-generated\") pod \"openstack-galera-0\" (UID: 
\"07a2906d-db30-4578-8b1e-088ca2f20ced\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.418559 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ng8cd\" (UniqueName: \"kubernetes.io/projected/07a2906d-db30-4578-8b1e-088ca2f20ced-kube-api-access-ng8cd\") pod \"openstack-galera-0\" (UID: \"07a2906d-db30-4578-8b1e-088ca2f20ced\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.418634 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"07a2906d-db30-4578-8b1e-088ca2f20ced\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.418698 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nj4k\" (UniqueName: \"kubernetes.io/projected/22459bcc-672e-4390-89ae-2b5fa48ded71-kube-api-access-5nj4k\") pod \"openstack-galera-1\" (UID: \"22459bcc-672e-4390-89ae-2b5fa48ded71\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.418748 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/22459bcc-672e-4390-89ae-2b5fa48ded71-config-data-generated\") pod \"openstack-galera-1\" (UID: \"22459bcc-672e-4390-89ae-2b5fa48ded71\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.418844 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07a2906d-db30-4578-8b1e-088ca2f20ced-operator-scripts\") pod \"openstack-galera-0\" (UID: \"07a2906d-db30-4578-8b1e-088ca2f20ced\") " 
pod="glance-kuttl-tests/openstack-galera-0" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.419037 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"07a2906d-db30-4578-8b1e-088ca2f20ced\") device mount path \"/mnt/openstack/pv10\"" pod="glance-kuttl-tests/openstack-galera-0" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.419445 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/07a2906d-db30-4578-8b1e-088ca2f20ced-config-data-default\") pod \"openstack-galera-0\" (UID: \"07a2906d-db30-4578-8b1e-088ca2f20ced\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.419599 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/07a2906d-db30-4578-8b1e-088ca2f20ced-kolla-config\") pod \"openstack-galera-0\" (UID: \"07a2906d-db30-4578-8b1e-088ca2f20ced\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.419955 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/07a2906d-db30-4578-8b1e-088ca2f20ced-config-data-generated\") pod \"openstack-galera-0\" (UID: \"07a2906d-db30-4578-8b1e-088ca2f20ced\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.420302 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07a2906d-db30-4578-8b1e-088ca2f20ced-operator-scripts\") pod \"openstack-galera-0\" (UID: \"07a2906d-db30-4578-8b1e-088ca2f20ced\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 
14:55:27.440039 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ng8cd\" (UniqueName: \"kubernetes.io/projected/07a2906d-db30-4578-8b1e-088ca2f20ced-kube-api-access-ng8cd\") pod \"openstack-galera-0\" (UID: \"07a2906d-db30-4578-8b1e-088ca2f20ced\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.455765 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"openstack-galera-0\" (UID: \"07a2906d-db30-4578-8b1e-088ca2f20ced\") " pod="glance-kuttl-tests/openstack-galera-0" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.520577 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwsz6\" (UniqueName: \"kubernetes.io/projected/3fcd9bac-c0cb-4de4-b630-0db07f110da7-kube-api-access-rwsz6\") pod \"openstack-galera-2\" (UID: \"3fcd9bac-c0cb-4de4-b630-0db07f110da7\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.520655 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-1\" (UID: \"22459bcc-672e-4390-89ae-2b5fa48ded71\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.520692 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/22459bcc-672e-4390-89ae-2b5fa48ded71-kolla-config\") pod \"openstack-galera-1\" (UID: \"22459bcc-672e-4390-89ae-2b5fa48ded71\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.520724 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" 
(UniqueName: \"kubernetes.io/configmap/22459bcc-672e-4390-89ae-2b5fa48ded71-config-data-default\") pod \"openstack-galera-1\" (UID: \"22459bcc-672e-4390-89ae-2b5fa48ded71\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.520752 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22459bcc-672e-4390-89ae-2b5fa48ded71-operator-scripts\") pod \"openstack-galera-1\" (UID: \"22459bcc-672e-4390-89ae-2b5fa48ded71\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.520783 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3fcd9bac-c0cb-4de4-b630-0db07f110da7-kolla-config\") pod \"openstack-galera-2\" (UID: \"3fcd9bac-c0cb-4de4-b630-0db07f110da7\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.520815 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3fcd9bac-c0cb-4de4-b630-0db07f110da7-config-data-default\") pod \"openstack-galera-2\" (UID: \"3fcd9bac-c0cb-4de4-b630-0db07f110da7\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.520851 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3fcd9bac-c0cb-4de4-b630-0db07f110da7-config-data-generated\") pod \"openstack-galera-2\" (UID: \"3fcd9bac-c0cb-4de4-b630-0db07f110da7\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.520891 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" 
(UniqueName: \"kubernetes.io/configmap/3fcd9bac-c0cb-4de4-b630-0db07f110da7-operator-scripts\") pod \"openstack-galera-2\" (UID: \"3fcd9bac-c0cb-4de4-b630-0db07f110da7\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.520942 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nj4k\" (UniqueName: \"kubernetes.io/projected/22459bcc-672e-4390-89ae-2b5fa48ded71-kube-api-access-5nj4k\") pod \"openstack-galera-1\" (UID: \"22459bcc-672e-4390-89ae-2b5fa48ded71\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.520972 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/22459bcc-672e-4390-89ae-2b5fa48ded71-config-data-generated\") pod \"openstack-galera-1\" (UID: \"22459bcc-672e-4390-89ae-2b5fa48ded71\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.521049 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-2\" (UID: \"3fcd9bac-c0cb-4de4-b630-0db07f110da7\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.521787 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-1\" (UID: \"22459bcc-672e-4390-89ae-2b5fa48ded71\") device mount path \"/mnt/openstack/pv06\"" pod="glance-kuttl-tests/openstack-galera-1" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.522654 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: 
\"kubernetes.io/configmap/22459bcc-672e-4390-89ae-2b5fa48ded71-kolla-config\") pod \"openstack-galera-1\" (UID: \"22459bcc-672e-4390-89ae-2b5fa48ded71\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.523251 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/22459bcc-672e-4390-89ae-2b5fa48ded71-config-data-default\") pod \"openstack-galera-1\" (UID: \"22459bcc-672e-4390-89ae-2b5fa48ded71\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.526194 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22459bcc-672e-4390-89ae-2b5fa48ded71-operator-scripts\") pod \"openstack-galera-1\" (UID: \"22459bcc-672e-4390-89ae-2b5fa48ded71\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.526528 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/22459bcc-672e-4390-89ae-2b5fa48ded71-config-data-generated\") pod \"openstack-galera-1\" (UID: \"22459bcc-672e-4390-89ae-2b5fa48ded71\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.537336 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nj4k\" (UniqueName: \"kubernetes.io/projected/22459bcc-672e-4390-89ae-2b5fa48ded71-kube-api-access-5nj4k\") pod \"openstack-galera-1\" (UID: \"22459bcc-672e-4390-89ae-2b5fa48ded71\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.545300 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"openstack-galera-1\" (UID: 
\"22459bcc-672e-4390-89ae-2b5fa48ded71\") " pod="glance-kuttl-tests/openstack-galera-1" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.622015 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstack-galera-0" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.622489 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3fcd9bac-c0cb-4de4-b630-0db07f110da7-kolla-config\") pod \"openstack-galera-2\" (UID: \"3fcd9bac-c0cb-4de4-b630-0db07f110da7\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.622558 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3fcd9bac-c0cb-4de4-b630-0db07f110da7-config-data-default\") pod \"openstack-galera-2\" (UID: \"3fcd9bac-c0cb-4de4-b630-0db07f110da7\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.622587 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3fcd9bac-c0cb-4de4-b630-0db07f110da7-config-data-generated\") pod \"openstack-galera-2\" (UID: \"3fcd9bac-c0cb-4de4-b630-0db07f110da7\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.622614 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fcd9bac-c0cb-4de4-b630-0db07f110da7-operator-scripts\") pod \"openstack-galera-2\" (UID: \"3fcd9bac-c0cb-4de4-b630-0db07f110da7\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.622669 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage12-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-2\" (UID: \"3fcd9bac-c0cb-4de4-b630-0db07f110da7\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.622750 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwsz6\" (UniqueName: \"kubernetes.io/projected/3fcd9bac-c0cb-4de4-b630-0db07f110da7-kube-api-access-rwsz6\") pod \"openstack-galera-2\" (UID: \"3fcd9bac-c0cb-4de4-b630-0db07f110da7\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.623343 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-2\" (UID: \"3fcd9bac-c0cb-4de4-b630-0db07f110da7\") device mount path \"/mnt/openstack/pv12\"" pod="glance-kuttl-tests/openstack-galera-2" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.623568 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3fcd9bac-c0cb-4de4-b630-0db07f110da7-config-data-default\") pod \"openstack-galera-2\" (UID: \"3fcd9bac-c0cb-4de4-b630-0db07f110da7\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.625800 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fcd9bac-c0cb-4de4-b630-0db07f110da7-operator-scripts\") pod \"openstack-galera-2\" (UID: \"3fcd9bac-c0cb-4de4-b630-0db07f110da7\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.630911 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/openstack-galera-1" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.631425 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3fcd9bac-c0cb-4de4-b630-0db07f110da7-config-data-generated\") pod \"openstack-galera-2\" (UID: \"3fcd9bac-c0cb-4de4-b630-0db07f110da7\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.632232 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3fcd9bac-c0cb-4de4-b630-0db07f110da7-kolla-config\") pod \"openstack-galera-2\" (UID: \"3fcd9bac-c0cb-4de4-b630-0db07f110da7\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.645388 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwsz6\" (UniqueName: \"kubernetes.io/projected/3fcd9bac-c0cb-4de4-b630-0db07f110da7-kube-api-access-rwsz6\") pod \"openstack-galera-2\" (UID: \"3fcd9bac-c0cb-4de4-b630-0db07f110da7\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.653828 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"openstack-galera-2\" (UID: \"3fcd9bac-c0cb-4de4-b630-0db07f110da7\") " pod="glance-kuttl-tests/openstack-galera-2" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.669629 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/openstack-galera-2" Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.866602 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstack-galera-0"] Jan 31 14:55:27 crc kubenswrapper[4751]: I0131 14:55:27.909404 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstack-galera-1"] Jan 31 14:55:27 crc kubenswrapper[4751]: W0131 14:55:27.915727 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod22459bcc_672e_4390_89ae_2b5fa48ded71.slice/crio-6b6faf7aa73840af2027f08065efac105f4b0ad43c2d2c60890bf024de99e2ca WatchSource:0}: Error finding container 6b6faf7aa73840af2027f08065efac105f4b0ad43c2d2c60890bf024de99e2ca: Status 404 returned error can't find the container with id 6b6faf7aa73840af2027f08065efac105f4b0ad43c2d2c60890bf024de99e2ca Jan 31 14:55:28 crc kubenswrapper[4751]: I0131 14:55:28.186365 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstack-galera-2"] Jan 31 14:55:28 crc kubenswrapper[4751]: W0131 14:55:28.186681 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3fcd9bac_c0cb_4de4_b630_0db07f110da7.slice/crio-4483e874a8f4e15e4dfcdca687206a7af35257a8c5ba1cb56d33195e769924f9 WatchSource:0}: Error finding container 4483e874a8f4e15e4dfcdca687206a7af35257a8c5ba1cb56d33195e769924f9: Status 404 returned error can't find the container with id 4483e874a8f4e15e4dfcdca687206a7af35257a8c5ba1cb56d33195e769924f9 Jan 31 14:55:28 crc kubenswrapper[4751]: I0131 14:55:28.567332 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-0" event={"ID":"07a2906d-db30-4578-8b1e-088ca2f20ced","Type":"ContainerStarted","Data":"2161c6d33cfda8a5b256a8346412b18ad489372437142a6a6602a50128a7c01a"} Jan 31 14:55:28 crc kubenswrapper[4751]: 
I0131 14:55:28.569168 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-1" event={"ID":"22459bcc-672e-4390-89ae-2b5fa48ded71","Type":"ContainerStarted","Data":"6b6faf7aa73840af2027f08065efac105f4b0ad43c2d2c60890bf024de99e2ca"} Jan 31 14:55:28 crc kubenswrapper[4751]: I0131 14:55:28.570447 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-2" event={"ID":"3fcd9bac-c0cb-4de4-b630-0db07f110da7","Type":"ContainerStarted","Data":"4483e874a8f4e15e4dfcdca687206a7af35257a8c5ba1cb56d33195e769924f9"} Jan 31 14:55:29 crc kubenswrapper[4751]: I0131 14:55:29.865966 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/infra-operator-controller-manager-57f67fdff5-45pkl" Jan 31 14:55:31 crc kubenswrapper[4751]: I0131 14:55:31.590980 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/memcached-0"] Jan 31 14:55:31 crc kubenswrapper[4751]: I0131 14:55:31.591851 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/memcached-0" Jan 31 14:55:31 crc kubenswrapper[4751]: I0131 14:55:31.595352 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"memcached-config-data" Jan 31 14:55:31 crc kubenswrapper[4751]: I0131 14:55:31.595758 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"memcached-memcached-dockercfg-c4g7x" Jan 31 14:55:31 crc kubenswrapper[4751]: I0131 14:55:31.637250 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/memcached-0"] Jan 31 14:55:31 crc kubenswrapper[4751]: I0131 14:55:31.681257 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9dfaa3fc-8bf7-420f-8581-4e917bf3f41c-config-data\") pod \"memcached-0\" (UID: \"9dfaa3fc-8bf7-420f-8581-4e917bf3f41c\") " pod="glance-kuttl-tests/memcached-0" Jan 31 14:55:31 crc kubenswrapper[4751]: I0131 14:55:31.681308 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9dfaa3fc-8bf7-420f-8581-4e917bf3f41c-kolla-config\") pod \"memcached-0\" (UID: \"9dfaa3fc-8bf7-420f-8581-4e917bf3f41c\") " pod="glance-kuttl-tests/memcached-0" Jan 31 14:55:31 crc kubenswrapper[4751]: I0131 14:55:31.681340 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbtbr\" (UniqueName: \"kubernetes.io/projected/9dfaa3fc-8bf7-420f-8581-4e917bf3f41c-kube-api-access-gbtbr\") pod \"memcached-0\" (UID: \"9dfaa3fc-8bf7-420f-8581-4e917bf3f41c\") " pod="glance-kuttl-tests/memcached-0" Jan 31 14:55:31 crc kubenswrapper[4751]: I0131 14:55:31.782769 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9dfaa3fc-8bf7-420f-8581-4e917bf3f41c-config-data\") pod 
\"memcached-0\" (UID: \"9dfaa3fc-8bf7-420f-8581-4e917bf3f41c\") " pod="glance-kuttl-tests/memcached-0" Jan 31 14:55:31 crc kubenswrapper[4751]: I0131 14:55:31.782850 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9dfaa3fc-8bf7-420f-8581-4e917bf3f41c-kolla-config\") pod \"memcached-0\" (UID: \"9dfaa3fc-8bf7-420f-8581-4e917bf3f41c\") " pod="glance-kuttl-tests/memcached-0" Jan 31 14:55:31 crc kubenswrapper[4751]: I0131 14:55:31.782905 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gbtbr\" (UniqueName: \"kubernetes.io/projected/9dfaa3fc-8bf7-420f-8581-4e917bf3f41c-kube-api-access-gbtbr\") pod \"memcached-0\" (UID: \"9dfaa3fc-8bf7-420f-8581-4e917bf3f41c\") " pod="glance-kuttl-tests/memcached-0" Jan 31 14:55:31 crc kubenswrapper[4751]: I0131 14:55:31.783692 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9dfaa3fc-8bf7-420f-8581-4e917bf3f41c-kolla-config\") pod \"memcached-0\" (UID: \"9dfaa3fc-8bf7-420f-8581-4e917bf3f41c\") " pod="glance-kuttl-tests/memcached-0" Jan 31 14:55:31 crc kubenswrapper[4751]: I0131 14:55:31.783718 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9dfaa3fc-8bf7-420f-8581-4e917bf3f41c-config-data\") pod \"memcached-0\" (UID: \"9dfaa3fc-8bf7-420f-8581-4e917bf3f41c\") " pod="glance-kuttl-tests/memcached-0" Jan 31 14:55:31 crc kubenswrapper[4751]: I0131 14:55:31.814328 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gbtbr\" (UniqueName: \"kubernetes.io/projected/9dfaa3fc-8bf7-420f-8581-4e917bf3f41c-kube-api-access-gbtbr\") pod \"memcached-0\" (UID: \"9dfaa3fc-8bf7-420f-8581-4e917bf3f41c\") " pod="glance-kuttl-tests/memcached-0" Jan 31 14:55:31 crc kubenswrapper[4751]: I0131 14:55:31.910896 4751 util.go:30] "No 
sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/memcached-0" Jan 31 14:55:32 crc kubenswrapper[4751]: I0131 14:55:32.525348 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-2wvgm"] Jan 31 14:55:32 crc kubenswrapper[4751]: I0131 14:55:32.526351 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-2wvgm" Jan 31 14:55:32 crc kubenswrapper[4751]: I0131 14:55:32.535689 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-index-dockercfg-sm9tx" Jan 31 14:55:32 crc kubenswrapper[4751]: I0131 14:55:32.545871 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-2wvgm"] Jan 31 14:55:32 crc kubenswrapper[4751]: I0131 14:55:32.700472 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzzlf\" (UniqueName: \"kubernetes.io/projected/44c515c1-f30f-44da-8959-cfd2530b46b7-kube-api-access-rzzlf\") pod \"rabbitmq-cluster-operator-index-2wvgm\" (UID: \"44c515c1-f30f-44da-8959-cfd2530b46b7\") " pod="openstack-operators/rabbitmq-cluster-operator-index-2wvgm" Jan 31 14:55:32 crc kubenswrapper[4751]: I0131 14:55:32.802062 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzzlf\" (UniqueName: \"kubernetes.io/projected/44c515c1-f30f-44da-8959-cfd2530b46b7-kube-api-access-rzzlf\") pod \"rabbitmq-cluster-operator-index-2wvgm\" (UID: \"44c515c1-f30f-44da-8959-cfd2530b46b7\") " pod="openstack-operators/rabbitmq-cluster-operator-index-2wvgm" Jan 31 14:55:32 crc kubenswrapper[4751]: I0131 14:55:32.821416 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzzlf\" (UniqueName: 
\"kubernetes.io/projected/44c515c1-f30f-44da-8959-cfd2530b46b7-kube-api-access-rzzlf\") pod \"rabbitmq-cluster-operator-index-2wvgm\" (UID: \"44c515c1-f30f-44da-8959-cfd2530b46b7\") " pod="openstack-operators/rabbitmq-cluster-operator-index-2wvgm" Jan 31 14:55:32 crc kubenswrapper[4751]: I0131 14:55:32.854292 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-2wvgm" Jan 31 14:55:36 crc kubenswrapper[4751]: I0131 14:55:36.213327 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/memcached-0"] Jan 31 14:55:36 crc kubenswrapper[4751]: I0131 14:55:36.237736 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-2wvgm"] Jan 31 14:55:36 crc kubenswrapper[4751]: W0131 14:55:36.247294 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod44c515c1_f30f_44da_8959_cfd2530b46b7.slice/crio-b07bdb6979a897c42db641896c014136c4b8817fab040635c752ccba6b137d19 WatchSource:0}: Error finding container b07bdb6979a897c42db641896c014136c4b8817fab040635c752ccba6b137d19: Status 404 returned error can't find the container with id b07bdb6979a897c42db641896c014136c4b8817fab040635c752ccba6b137d19 Jan 31 14:55:36 crc kubenswrapper[4751]: I0131 14:55:36.618897 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-2wvgm" event={"ID":"44c515c1-f30f-44da-8959-cfd2530b46b7","Type":"ContainerStarted","Data":"b07bdb6979a897c42db641896c014136c4b8817fab040635c752ccba6b137d19"} Jan 31 14:55:36 crc kubenswrapper[4751]: I0131 14:55:36.619813 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/memcached-0" event={"ID":"9dfaa3fc-8bf7-420f-8581-4e917bf3f41c","Type":"ContainerStarted","Data":"cf904354b92714c266cf175421ba71e5ed9cb49d7ba4bbc0c72df9a09635ce8a"} Jan 31 14:55:38 crc 
kubenswrapper[4751]: I0131 14:55:38.632709 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-0" event={"ID":"07a2906d-db30-4578-8b1e-088ca2f20ced","Type":"ContainerStarted","Data":"1dd59d047e2f99760bb45d01f43a08d4aeb1e5d45326b19b0123bcf023e41f96"} Jan 31 14:55:38 crc kubenswrapper[4751]: I0131 14:55:38.641220 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-1" event={"ID":"22459bcc-672e-4390-89ae-2b5fa48ded71","Type":"ContainerStarted","Data":"0844a74085d7d943d717fb7babb3a7b7db796dff92dfd7c3894a2eccb22eb987"} Jan 31 14:55:38 crc kubenswrapper[4751]: I0131 14:55:38.644189 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-2" event={"ID":"3fcd9bac-c0cb-4de4-b630-0db07f110da7","Type":"ContainerStarted","Data":"b0d3ea91f474d5f0241c4f1e0b20927cdf5d85e229fe91747902d0e90daf242d"} Jan 31 14:55:38 crc kubenswrapper[4751]: I0131 14:55:38.896811 4751 patch_prober.go:28] interesting pod/machine-config-daemon-2wpj7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 14:55:38 crc kubenswrapper[4751]: I0131 14:55:38.896875 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 14:55:38 crc kubenswrapper[4751]: I0131 14:55:38.896926 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" Jan 31 14:55:38 crc kubenswrapper[4751]: I0131 14:55:38.897493 4751 kuberuntime_manager.go:1027] "Message for Container of 
pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f4d4f92719c72ec0adb31e02a10d5c8bcb4b1a9b3bfb5b0e7ed8cfdbc4a53235"} pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 31 14:55:38 crc kubenswrapper[4751]: I0131 14:55:38.897563 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerName="machine-config-daemon" containerID="cri-o://f4d4f92719c72ec0adb31e02a10d5c8bcb4b1a9b3bfb5b0e7ed8cfdbc4a53235" gracePeriod=600
Jan 31 14:55:39 crc kubenswrapper[4751]: I0131 14:55:39.652275 4751 generic.go:334] "Generic (PLEG): container finished" podID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerID="f4d4f92719c72ec0adb31e02a10d5c8bcb4b1a9b3bfb5b0e7ed8cfdbc4a53235" exitCode=0
Jan 31 14:55:39 crc kubenswrapper[4751]: I0131 14:55:39.652404 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" event={"ID":"b4c170e8-22c9-43a9-8b34-9d626c2ccddc","Type":"ContainerDied","Data":"f4d4f92719c72ec0adb31e02a10d5c8bcb4b1a9b3bfb5b0e7ed8cfdbc4a53235"}
Jan 31 14:55:39 crc kubenswrapper[4751]: I0131 14:55:39.652593 4751 scope.go:117] "RemoveContainer" containerID="ef29f0f695de11b302d97f5ade678c0ae9fdc43953c2430b685d7fd276ee3217"
Jan 31 14:55:40 crc kubenswrapper[4751]: I0131 14:55:40.669664 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" event={"ID":"b4c170e8-22c9-43a9-8b34-9d626c2ccddc","Type":"ContainerStarted","Data":"dc064826cd8a78005216541d25736856cc2dd920bfe44778b79dbfd2f76ed341"}
Jan 31 14:55:41 crc kubenswrapper[4751]: I0131 14:55:41.675807 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/memcached-0" event={"ID":"9dfaa3fc-8bf7-420f-8581-4e917bf3f41c","Type":"ContainerStarted","Data":"7d9c0759f36bb098c88e33085270280041e2db4b3aa27d3f10dea45195deff2f"}
Jan 31 14:55:41 crc kubenswrapper[4751]: I0131 14:55:41.676343 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/memcached-0"
Jan 31 14:55:41 crc kubenswrapper[4751]: I0131 14:55:41.678408 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-2wvgm" event={"ID":"44c515c1-f30f-44da-8959-cfd2530b46b7","Type":"ContainerStarted","Data":"584fb5ac2ba8eb206c7f2718045838c65ff9fa87a8c959a34945da15defa3f15"}
Jan 31 14:55:41 crc kubenswrapper[4751]: I0131 14:55:41.695116 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/memcached-0" podStartSLOduration=8.505498546 podStartE2EDuration="10.695098411s" podCreationTimestamp="2026-01-31 14:55:31 +0000 UTC" firstStartedPulling="2026-01-31 14:55:37.938743063 +0000 UTC m=+842.313455948" lastFinishedPulling="2026-01-31 14:55:40.128342928 +0000 UTC m=+844.503055813" observedRunningTime="2026-01-31 14:55:41.692266507 +0000 UTC m=+846.066979392" watchObservedRunningTime="2026-01-31 14:55:41.695098411 +0000 UTC m=+846.069811306"
Jan 31 14:55:41 crc kubenswrapper[4751]: I0131 14:55:41.709604 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-index-2wvgm" podStartSLOduration=6.133324191 podStartE2EDuration="9.709585423s" podCreationTimestamp="2026-01-31 14:55:32 +0000 UTC" firstStartedPulling="2026-01-31 14:55:37.93366273 +0000 UTC m=+842.308375615" lastFinishedPulling="2026-01-31 14:55:41.509923962 +0000 UTC m=+845.884636847" observedRunningTime="2026-01-31 14:55:41.705718631 +0000 UTC m=+846.080431516" watchObservedRunningTime="2026-01-31 14:55:41.709585423 +0000 UTC m=+846.084298308"
Jan 31 14:55:42 crc kubenswrapper[4751]: I0131 14:55:42.688352 4751 generic.go:334] "Generic (PLEG): container finished" podID="22459bcc-672e-4390-89ae-2b5fa48ded71" containerID="0844a74085d7d943d717fb7babb3a7b7db796dff92dfd7c3894a2eccb22eb987" exitCode=0
Jan 31 14:55:42 crc kubenswrapper[4751]: I0131 14:55:42.688424 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-1" event={"ID":"22459bcc-672e-4390-89ae-2b5fa48ded71","Type":"ContainerDied","Data":"0844a74085d7d943d717fb7babb3a7b7db796dff92dfd7c3894a2eccb22eb987"}
Jan 31 14:55:42 crc kubenswrapper[4751]: I0131 14:55:42.695107 4751 generic.go:334] "Generic (PLEG): container finished" podID="3fcd9bac-c0cb-4de4-b630-0db07f110da7" containerID="b0d3ea91f474d5f0241c4f1e0b20927cdf5d85e229fe91747902d0e90daf242d" exitCode=0
Jan 31 14:55:42 crc kubenswrapper[4751]: I0131 14:55:42.695167 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-2" event={"ID":"3fcd9bac-c0cb-4de4-b630-0db07f110da7","Type":"ContainerDied","Data":"b0d3ea91f474d5f0241c4f1e0b20927cdf5d85e229fe91747902d0e90daf242d"}
Jan 31 14:55:42 crc kubenswrapper[4751]: I0131 14:55:42.699269 4751 generic.go:334] "Generic (PLEG): container finished" podID="07a2906d-db30-4578-8b1e-088ca2f20ced" containerID="1dd59d047e2f99760bb45d01f43a08d4aeb1e5d45326b19b0123bcf023e41f96" exitCode=0
Jan 31 14:55:42 crc kubenswrapper[4751]: I0131 14:55:42.699604 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-0" event={"ID":"07a2906d-db30-4578-8b1e-088ca2f20ced","Type":"ContainerDied","Data":"1dd59d047e2f99760bb45d01f43a08d4aeb1e5d45326b19b0123bcf023e41f96"}
Jan 31 14:55:42 crc kubenswrapper[4751]: I0131 14:55:42.855126 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/rabbitmq-cluster-operator-index-2wvgm"
Jan 31 14:55:42 crc kubenswrapper[4751]: I0131 14:55:42.855325 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/rabbitmq-cluster-operator-index-2wvgm"
Jan 31 14:55:42 crc kubenswrapper[4751]: I0131 14:55:42.895960 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/rabbitmq-cluster-operator-index-2wvgm"
Jan 31 14:55:43 crc kubenswrapper[4751]: I0131 14:55:43.711494 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-2" event={"ID":"3fcd9bac-c0cb-4de4-b630-0db07f110da7","Type":"ContainerStarted","Data":"eeb0727f6d7a3d1d251766b50edc1058bc460aa581ba0d5f746de288b9b3f16b"}
Jan 31 14:55:43 crc kubenswrapper[4751]: I0131 14:55:43.716465 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-0" event={"ID":"07a2906d-db30-4578-8b1e-088ca2f20ced","Type":"ContainerStarted","Data":"0e2fc16f141e03061cef807a2713d8f66a6c5d9ed59205690727526ba6a882ea"}
Jan 31 14:55:43 crc kubenswrapper[4751]: I0131 14:55:43.721999 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-1" event={"ID":"22459bcc-672e-4390-89ae-2b5fa48ded71","Type":"ContainerStarted","Data":"6234bbbcfac3eddf715e4285a6b3d7b6a0aff6d850ad2df858a2deee34d9f571"}
Jan 31 14:55:43 crc kubenswrapper[4751]: I0131 14:55:43.747491 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/openstack-galera-2" podStartSLOduration=7.989230855 podStartE2EDuration="17.74746622s" podCreationTimestamp="2026-01-31 14:55:26 +0000 UTC" firstStartedPulling="2026-01-31 14:55:28.18967361 +0000 UTC m=+832.564386495" lastFinishedPulling="2026-01-31 14:55:37.947908975 +0000 UTC m=+842.322621860" observedRunningTime="2026-01-31 14:55:43.742353895 +0000 UTC m=+848.117066830" watchObservedRunningTime="2026-01-31 14:55:43.74746622 +0000 UTC m=+848.122179135"
Jan 31 14:55:43 crc kubenswrapper[4751]: I0131 14:55:43.783292 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/openstack-galera-1" podStartSLOduration=7.754811398 podStartE2EDuration="17.783263903s" podCreationTimestamp="2026-01-31 14:55:26 +0000 UTC" firstStartedPulling="2026-01-31 14:55:27.919408509 +0000 UTC m=+832.294121394" lastFinishedPulling="2026-01-31 14:55:37.947861014 +0000 UTC m=+842.322573899" observedRunningTime="2026-01-31 14:55:43.777272055 +0000 UTC m=+848.151985010" watchObservedRunningTime="2026-01-31 14:55:43.783263903 +0000 UTC m=+848.157976818"
Jan 31 14:55:43 crc kubenswrapper[4751]: I0131 14:55:43.811753 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/openstack-galera-0" podStartSLOduration=7.705973131 podStartE2EDuration="17.811727043s" podCreationTimestamp="2026-01-31 14:55:26 +0000 UTC" firstStartedPulling="2026-01-31 14:55:27.885479135 +0000 UTC m=+832.260192020" lastFinishedPulling="2026-01-31 14:55:37.991233047 +0000 UTC m=+842.365945932" observedRunningTime="2026-01-31 14:55:43.802956022 +0000 UTC m=+848.177668937" watchObservedRunningTime="2026-01-31 14:55:43.811727043 +0000 UTC m=+848.186439958"
Jan 31 14:55:46 crc kubenswrapper[4751]: I0131 14:55:46.916366 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/memcached-0"
Jan 31 14:55:47 crc kubenswrapper[4751]: I0131 14:55:47.622859 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/openstack-galera-0"
Jan 31 14:55:47 crc kubenswrapper[4751]: I0131 14:55:47.622917 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/openstack-galera-0"
Jan 31 14:55:47 crc kubenswrapper[4751]: I0131 14:55:47.632136 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/openstack-galera-1"
Jan 31 14:55:47 crc kubenswrapper[4751]: I0131 14:55:47.632181 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/openstack-galera-1"
Jan 31 14:55:47 crc kubenswrapper[4751]: I0131 14:55:47.670403 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/openstack-galera-2"
Jan 31 14:55:47 crc kubenswrapper[4751]: I0131 14:55:47.670441 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/openstack-galera-2"
Jan 31 14:55:51 crc kubenswrapper[4751]: I0131 14:55:51.820857 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/openstack-galera-2"
Jan 31 14:55:51 crc kubenswrapper[4751]: I0131 14:55:51.906632 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/openstack-galera-2"
Jan 31 14:55:52 crc kubenswrapper[4751]: I0131 14:55:52.911223 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/rabbitmq-cluster-operator-index-2wvgm"
Jan 31 14:55:56 crc kubenswrapper[4751]: I0131 14:55:56.305003 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/root-account-create-update-4gxnx"]
Jan 31 14:55:56 crc kubenswrapper[4751]: I0131 14:55:56.306195 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/root-account-create-update-4gxnx"
Jan 31 14:55:56 crc kubenswrapper[4751]: I0131 14:55:56.309938 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"openstack-mariadb-root-db-secret"
Jan 31 14:55:56 crc kubenswrapper[4751]: I0131 14:55:56.315720 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/root-account-create-update-4gxnx"]
Jan 31 14:55:56 crc kubenswrapper[4751]: I0131 14:55:56.340365 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b6442aa-bbe5-4658-bfe9-28ee4f9b3daf-operator-scripts\") pod \"root-account-create-update-4gxnx\" (UID: \"6b6442aa-bbe5-4658-bfe9-28ee4f9b3daf\") " pod="glance-kuttl-tests/root-account-create-update-4gxnx"
Jan 31 14:55:56 crc kubenswrapper[4751]: I0131 14:55:56.340538 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2fqsm\" (UniqueName: \"kubernetes.io/projected/6b6442aa-bbe5-4658-bfe9-28ee4f9b3daf-kube-api-access-2fqsm\") pod \"root-account-create-update-4gxnx\" (UID: \"6b6442aa-bbe5-4658-bfe9-28ee4f9b3daf\") " pod="glance-kuttl-tests/root-account-create-update-4gxnx"
Jan 31 14:55:56 crc kubenswrapper[4751]: I0131 14:55:56.442624 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b6442aa-bbe5-4658-bfe9-28ee4f9b3daf-operator-scripts\") pod \"root-account-create-update-4gxnx\" (UID: \"6b6442aa-bbe5-4658-bfe9-28ee4f9b3daf\") " pod="glance-kuttl-tests/root-account-create-update-4gxnx"
Jan 31 14:55:56 crc kubenswrapper[4751]: I0131 14:55:56.442814 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2fqsm\" (UniqueName: \"kubernetes.io/projected/6b6442aa-bbe5-4658-bfe9-28ee4f9b3daf-kube-api-access-2fqsm\") pod \"root-account-create-update-4gxnx\" (UID: \"6b6442aa-bbe5-4658-bfe9-28ee4f9b3daf\") " pod="glance-kuttl-tests/root-account-create-update-4gxnx"
Jan 31 14:55:56 crc kubenswrapper[4751]: I0131 14:55:56.443838 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b6442aa-bbe5-4658-bfe9-28ee4f9b3daf-operator-scripts\") pod \"root-account-create-update-4gxnx\" (UID: \"6b6442aa-bbe5-4658-bfe9-28ee4f9b3daf\") " pod="glance-kuttl-tests/root-account-create-update-4gxnx"
Jan 31 14:55:56 crc kubenswrapper[4751]: I0131 14:55:56.466661 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2fqsm\" (UniqueName: \"kubernetes.io/projected/6b6442aa-bbe5-4658-bfe9-28ee4f9b3daf-kube-api-access-2fqsm\") pod \"root-account-create-update-4gxnx\" (UID: \"6b6442aa-bbe5-4658-bfe9-28ee4f9b3daf\") " pod="glance-kuttl-tests/root-account-create-update-4gxnx"
Jan 31 14:55:56 crc kubenswrapper[4751]: I0131 14:55:56.621657 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/root-account-create-update-4gxnx"
Jan 31 14:55:56 crc kubenswrapper[4751]: I0131 14:55:56.889690 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/root-account-create-update-4gxnx"]
Jan 31 14:55:57 crc kubenswrapper[4751]: I0131 14:55:57.818336 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/root-account-create-update-4gxnx" event={"ID":"6b6442aa-bbe5-4658-bfe9-28ee4f9b3daf","Type":"ContainerStarted","Data":"b26b741fdf290763b6328eb1a8c5b1a7f048f2aecba802a031d85386bf813c0e"}
Jan 31 14:55:57 crc kubenswrapper[4751]: I0131 14:55:57.818726 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/root-account-create-update-4gxnx" event={"ID":"6b6442aa-bbe5-4658-bfe9-28ee4f9b3daf","Type":"ContainerStarted","Data":"0a2861c5cc0f595bf81985741b866cc835b0dcdfb494adb385879ca0b4137437"}
Jan 31 14:55:57 crc kubenswrapper[4751]: I0131 14:55:57.844024 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/root-account-create-update-4gxnx" podStartSLOduration=1.8439852060000002 podStartE2EDuration="1.843985206s" podCreationTimestamp="2026-01-31 14:55:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:55:57.840985247 +0000 UTC m=+862.215698232" watchObservedRunningTime="2026-01-31 14:55:57.843985206 +0000 UTC m=+862.218698101"
Jan 31 14:55:58 crc kubenswrapper[4751]: I0131 14:55:58.825669 4751 generic.go:334] "Generic (PLEG): container finished" podID="6b6442aa-bbe5-4658-bfe9-28ee4f9b3daf" containerID="b26b741fdf290763b6328eb1a8c5b1a7f048f2aecba802a031d85386bf813c0e" exitCode=0
Jan 31 14:55:58 crc kubenswrapper[4751]: I0131 14:55:58.825804 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/root-account-create-update-4gxnx" event={"ID":"6b6442aa-bbe5-4658-bfe9-28ee4f9b3daf","Type":"ContainerDied","Data":"b26b741fdf290763b6328eb1a8c5b1a7f048f2aecba802a031d85386bf813c0e"}
Jan 31 14:56:00 crc kubenswrapper[4751]: I0131 14:56:00.328330 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9w6rf"]
Jan 31 14:56:00 crc kubenswrapper[4751]: I0131 14:56:00.330899 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9w6rf"
Jan 31 14:56:00 crc kubenswrapper[4751]: I0131 14:56:00.348175 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9w6rf"]
Jan 31 14:56:00 crc kubenswrapper[4751]: I0131 14:56:00.404956 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db9636cf-e895-422b-8064-ce6d652a85d1-utilities\") pod \"redhat-operators-9w6rf\" (UID: \"db9636cf-e895-422b-8064-ce6d652a85d1\") " pod="openshift-marketplace/redhat-operators-9w6rf"
Jan 31 14:56:00 crc kubenswrapper[4751]: I0131 14:56:00.405038 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w2b4z\" (UniqueName: \"kubernetes.io/projected/db9636cf-e895-422b-8064-ce6d652a85d1-kube-api-access-w2b4z\") pod \"redhat-operators-9w6rf\" (UID: \"db9636cf-e895-422b-8064-ce6d652a85d1\") " pod="openshift-marketplace/redhat-operators-9w6rf"
Jan 31 14:56:00 crc kubenswrapper[4751]: I0131 14:56:00.405112 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db9636cf-e895-422b-8064-ce6d652a85d1-catalog-content\") pod \"redhat-operators-9w6rf\" (UID: \"db9636cf-e895-422b-8064-ce6d652a85d1\") " pod="openshift-marketplace/redhat-operators-9w6rf"
Jan 31 14:56:00 crc kubenswrapper[4751]: I0131 14:56:00.506816 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w2b4z\" (UniqueName: \"kubernetes.io/projected/db9636cf-e895-422b-8064-ce6d652a85d1-kube-api-access-w2b4z\") pod \"redhat-operators-9w6rf\" (UID: \"db9636cf-e895-422b-8064-ce6d652a85d1\") " pod="openshift-marketplace/redhat-operators-9w6rf"
Jan 31 14:56:00 crc kubenswrapper[4751]: I0131 14:56:00.506865 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db9636cf-e895-422b-8064-ce6d652a85d1-catalog-content\") pod \"redhat-operators-9w6rf\" (UID: \"db9636cf-e895-422b-8064-ce6d652a85d1\") " pod="openshift-marketplace/redhat-operators-9w6rf"
Jan 31 14:56:00 crc kubenswrapper[4751]: I0131 14:56:00.506947 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db9636cf-e895-422b-8064-ce6d652a85d1-utilities\") pod \"redhat-operators-9w6rf\" (UID: \"db9636cf-e895-422b-8064-ce6d652a85d1\") " pod="openshift-marketplace/redhat-operators-9w6rf"
Jan 31 14:56:00 crc kubenswrapper[4751]: I0131 14:56:00.507382 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db9636cf-e895-422b-8064-ce6d652a85d1-utilities\") pod \"redhat-operators-9w6rf\" (UID: \"db9636cf-e895-422b-8064-ce6d652a85d1\") " pod="openshift-marketplace/redhat-operators-9w6rf"
Jan 31 14:56:00 crc kubenswrapper[4751]: I0131 14:56:00.507439 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db9636cf-e895-422b-8064-ce6d652a85d1-catalog-content\") pod \"redhat-operators-9w6rf\" (UID: \"db9636cf-e895-422b-8064-ce6d652a85d1\") " pod="openshift-marketplace/redhat-operators-9w6rf"
Jan 31 14:56:00 crc kubenswrapper[4751]: I0131 14:56:00.527234 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w2b4z\" (UniqueName: \"kubernetes.io/projected/db9636cf-e895-422b-8064-ce6d652a85d1-kube-api-access-w2b4z\") pod \"redhat-operators-9w6rf\" (UID: \"db9636cf-e895-422b-8064-ce6d652a85d1\") " pod="openshift-marketplace/redhat-operators-9w6rf"
Jan 31 14:56:00 crc kubenswrapper[4751]: I0131 14:56:00.671923 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9w6rf"
Jan 31 14:56:03 crc kubenswrapper[4751]: I0131 14:56:03.774527 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590swhdb"]
Jan 31 14:56:03 crc kubenswrapper[4751]: I0131 14:56:03.776314 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590swhdb"
Jan 31 14:56:03 crc kubenswrapper[4751]: I0131 14:56:03.783156 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-wxkjx"
Jan 31 14:56:03 crc kubenswrapper[4751]: I0131 14:56:03.790167 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590swhdb"]
Jan 31 14:56:03 crc kubenswrapper[4751]: I0131 14:56:03.850769 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a525382f-29ee-4393-9e5b-1b3e989a1bc3-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590swhdb\" (UID: \"a525382f-29ee-4393-9e5b-1b3e989a1bc3\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590swhdb"
Jan 31 14:56:03 crc kubenswrapper[4751]: I0131 14:56:03.850847 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k9mlm\" (UniqueName: \"kubernetes.io/projected/a525382f-29ee-4393-9e5b-1b3e989a1bc3-kube-api-access-k9mlm\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590swhdb\" (UID: \"a525382f-29ee-4393-9e5b-1b3e989a1bc3\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590swhdb"
Jan 31 14:56:03 crc kubenswrapper[4751]: I0131 14:56:03.851036 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a525382f-29ee-4393-9e5b-1b3e989a1bc3-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590swhdb\" (UID: \"a525382f-29ee-4393-9e5b-1b3e989a1bc3\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590swhdb"
Jan 31 14:56:03 crc kubenswrapper[4751]: I0131 14:56:03.952980 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a525382f-29ee-4393-9e5b-1b3e989a1bc3-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590swhdb\" (UID: \"a525382f-29ee-4393-9e5b-1b3e989a1bc3\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590swhdb"
Jan 31 14:56:03 crc kubenswrapper[4751]: I0131 14:56:03.953050 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k9mlm\" (UniqueName: \"kubernetes.io/projected/a525382f-29ee-4393-9e5b-1b3e989a1bc3-kube-api-access-k9mlm\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590swhdb\" (UID: \"a525382f-29ee-4393-9e5b-1b3e989a1bc3\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590swhdb"
Jan 31 14:56:03 crc kubenswrapper[4751]: I0131 14:56:03.953132 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a525382f-29ee-4393-9e5b-1b3e989a1bc3-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590swhdb\" (UID: \"a525382f-29ee-4393-9e5b-1b3e989a1bc3\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590swhdb"
Jan 31 14:56:03 crc kubenswrapper[4751]: I0131 14:56:03.953979 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a525382f-29ee-4393-9e5b-1b3e989a1bc3-bundle\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590swhdb\" (UID: \"a525382f-29ee-4393-9e5b-1b3e989a1bc3\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590swhdb"
Jan 31 14:56:03 crc kubenswrapper[4751]: I0131 14:56:03.953986 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a525382f-29ee-4393-9e5b-1b3e989a1bc3-util\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590swhdb\" (UID: \"a525382f-29ee-4393-9e5b-1b3e989a1bc3\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590swhdb"
Jan 31 14:56:03 crc kubenswrapper[4751]: I0131 14:56:03.975254 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k9mlm\" (UniqueName: \"kubernetes.io/projected/a525382f-29ee-4393-9e5b-1b3e989a1bc3-kube-api-access-k9mlm\") pod \"9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590swhdb\" (UID: \"a525382f-29ee-4393-9e5b-1b3e989a1bc3\") " pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590swhdb"
Jan 31 14:56:04 crc kubenswrapper[4751]: I0131 14:56:04.035372 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/root-account-create-update-4gxnx"
Jan 31 14:56:04 crc kubenswrapper[4751]: I0131 14:56:04.097731 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590swhdb"
Jan 31 14:56:04 crc kubenswrapper[4751]: I0131 14:56:04.156507 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2fqsm\" (UniqueName: \"kubernetes.io/projected/6b6442aa-bbe5-4658-bfe9-28ee4f9b3daf-kube-api-access-2fqsm\") pod \"6b6442aa-bbe5-4658-bfe9-28ee4f9b3daf\" (UID: \"6b6442aa-bbe5-4658-bfe9-28ee4f9b3daf\") "
Jan 31 14:56:04 crc kubenswrapper[4751]: I0131 14:56:04.156562 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b6442aa-bbe5-4658-bfe9-28ee4f9b3daf-operator-scripts\") pod \"6b6442aa-bbe5-4658-bfe9-28ee4f9b3daf\" (UID: \"6b6442aa-bbe5-4658-bfe9-28ee4f9b3daf\") "
Jan 31 14:56:04 crc kubenswrapper[4751]: I0131 14:56:04.157372 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b6442aa-bbe5-4658-bfe9-28ee4f9b3daf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6b6442aa-bbe5-4658-bfe9-28ee4f9b3daf" (UID: "6b6442aa-bbe5-4658-bfe9-28ee4f9b3daf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 14:56:04 crc kubenswrapper[4751]: I0131 14:56:04.162778 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b6442aa-bbe5-4658-bfe9-28ee4f9b3daf-kube-api-access-2fqsm" (OuterVolumeSpecName: "kube-api-access-2fqsm") pod "6b6442aa-bbe5-4658-bfe9-28ee4f9b3daf" (UID: "6b6442aa-bbe5-4658-bfe9-28ee4f9b3daf"). InnerVolumeSpecName "kube-api-access-2fqsm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 14:56:04 crc kubenswrapper[4751]: I0131 14:56:04.258750 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2fqsm\" (UniqueName: \"kubernetes.io/projected/6b6442aa-bbe5-4658-bfe9-28ee4f9b3daf-kube-api-access-2fqsm\") on node \"crc\" DevicePath \"\""
Jan 31 14:56:04 crc kubenswrapper[4751]: I0131 14:56:04.258982 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6b6442aa-bbe5-4658-bfe9-28ee4f9b3daf-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 14:56:04 crc kubenswrapper[4751]: I0131 14:56:04.395253 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9w6rf"]
Jan 31 14:56:04 crc kubenswrapper[4751]: W0131 14:56:04.406313 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddb9636cf_e895_422b_8064_ce6d652a85d1.slice/crio-dc0b582d511f1669c6b4a4462f76d30ea3f273270449a3ea86a4620ceb3d1f30 WatchSource:0}: Error finding container dc0b582d511f1669c6b4a4462f76d30ea3f273270449a3ea86a4620ceb3d1f30: Status 404 returned error can't find the container with id dc0b582d511f1669c6b4a4462f76d30ea3f273270449a3ea86a4620ceb3d1f30
Jan 31 14:56:04 crc kubenswrapper[4751]: I0131 14:56:04.499564 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590swhdb"]
Jan 31 14:56:04 crc kubenswrapper[4751]: W0131 14:56:04.507766 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda525382f_29ee_4393_9e5b_1b3e989a1bc3.slice/crio-1f53cc0349cc95b3204177d9541764f06423422d60741c2505c0c3dc21ce5ef9 WatchSource:0}: Error finding container 1f53cc0349cc95b3204177d9541764f06423422d60741c2505c0c3dc21ce5ef9: Status 404 returned error can't find the container with id 1f53cc0349cc95b3204177d9541764f06423422d60741c2505c0c3dc21ce5ef9
Jan 31 14:56:04 crc kubenswrapper[4751]: I0131 14:56:04.860371 4751 generic.go:334] "Generic (PLEG): container finished" podID="a525382f-29ee-4393-9e5b-1b3e989a1bc3" containerID="ecd0273950524364ff0a405d7ba30af3f5ab2065b0d4986c88176cf55c6327d6" exitCode=0
Jan 31 14:56:04 crc kubenswrapper[4751]: I0131 14:56:04.860419 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590swhdb" event={"ID":"a525382f-29ee-4393-9e5b-1b3e989a1bc3","Type":"ContainerDied","Data":"ecd0273950524364ff0a405d7ba30af3f5ab2065b0d4986c88176cf55c6327d6"}
Jan 31 14:56:04 crc kubenswrapper[4751]: I0131 14:56:04.860686 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590swhdb" event={"ID":"a525382f-29ee-4393-9e5b-1b3e989a1bc3","Type":"ContainerStarted","Data":"1f53cc0349cc95b3204177d9541764f06423422d60741c2505c0c3dc21ce5ef9"}
Jan 31 14:56:04 crc kubenswrapper[4751]: I0131 14:56:04.863172 4751 generic.go:334] "Generic (PLEG): container finished" podID="db9636cf-e895-422b-8064-ce6d652a85d1" containerID="526b390ea0ece0e6fceffb4c1922e3d8cdf235036daf439a7f234526b708d9ea" exitCode=0
Jan 31 14:56:04 crc kubenswrapper[4751]: I0131 14:56:04.863323 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9w6rf" event={"ID":"db9636cf-e895-422b-8064-ce6d652a85d1","Type":"ContainerDied","Data":"526b390ea0ece0e6fceffb4c1922e3d8cdf235036daf439a7f234526b708d9ea"}
Jan 31 14:56:04 crc kubenswrapper[4751]: I0131 14:56:04.863423 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9w6rf" event={"ID":"db9636cf-e895-422b-8064-ce6d652a85d1","Type":"ContainerStarted","Data":"dc0b582d511f1669c6b4a4462f76d30ea3f273270449a3ea86a4620ceb3d1f30"}
Jan 31 14:56:04 crc kubenswrapper[4751]: I0131 14:56:04.866198 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/root-account-create-update-4gxnx" event={"ID":"6b6442aa-bbe5-4658-bfe9-28ee4f9b3daf","Type":"ContainerDied","Data":"0a2861c5cc0f595bf81985741b866cc835b0dcdfb494adb385879ca0b4137437"}
Jan 31 14:56:04 crc kubenswrapper[4751]: I0131 14:56:04.866228 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a2861c5cc0f595bf81985741b866cc835b0dcdfb494adb385879ca0b4137437"
Jan 31 14:56:04 crc kubenswrapper[4751]: I0131 14:56:04.866297 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/root-account-create-update-4gxnx"
Jan 31 14:56:05 crc kubenswrapper[4751]: I0131 14:56:05.873644 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9w6rf" event={"ID":"db9636cf-e895-422b-8064-ce6d652a85d1","Type":"ContainerStarted","Data":"72edda77bd270bc034f43aee7ab36fd6299606986815e322ded2d2de14aa053b"}
Jan 31 14:56:06 crc kubenswrapper[4751]: E0131 14:56:06.476646 4751 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 38.102.83.98:52020->38.102.83.98:44981: write tcp 38.102.83.98:52020->38.102.83.98:44981: write: broken pipe
Jan 31 14:56:06 crc kubenswrapper[4751]: I0131 14:56:06.882425 4751 generic.go:334] "Generic (PLEG): container finished" podID="db9636cf-e895-422b-8064-ce6d652a85d1" containerID="72edda77bd270bc034f43aee7ab36fd6299606986815e322ded2d2de14aa053b" exitCode=0
Jan 31 14:56:06 crc kubenswrapper[4751]: I0131 14:56:06.882540 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9w6rf" event={"ID":"db9636cf-e895-422b-8064-ce6d652a85d1","Type":"ContainerDied","Data":"72edda77bd270bc034f43aee7ab36fd6299606986815e322ded2d2de14aa053b"}
Jan 31 14:56:06 crc kubenswrapper[4751]: I0131 14:56:06.884787 4751 generic.go:334] "Generic (PLEG): container finished" podID="a525382f-29ee-4393-9e5b-1b3e989a1bc3" containerID="acab140e6ca6aa95c6844fc3952eecbc060037dc14e3f1b6a536e962fd34fb0c" exitCode=0
Jan 31 14:56:06 crc kubenswrapper[4751]: I0131 14:56:06.884835 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590swhdb" event={"ID":"a525382f-29ee-4393-9e5b-1b3e989a1bc3","Type":"ContainerDied","Data":"acab140e6ca6aa95c6844fc3952eecbc060037dc14e3f1b6a536e962fd34fb0c"}
Jan 31 14:56:07 crc kubenswrapper[4751]: I0131 14:56:07.756552 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/openstack-galera-2" podUID="3fcd9bac-c0cb-4de4-b630-0db07f110da7" containerName="galera" probeResult="failure" output=<
Jan 31 14:56:07 crc kubenswrapper[4751]: wsrep_local_state_comment (Donor/Desynced) differs from Synced
Jan 31 14:56:07 crc kubenswrapper[4751]: >
Jan 31 14:56:07 crc kubenswrapper[4751]: I0131 14:56:07.893895 4751 generic.go:334] "Generic (PLEG): container finished" podID="a525382f-29ee-4393-9e5b-1b3e989a1bc3" containerID="f84e5f08594d3f72dc6ce544065026534e30bfc6f05c4074d6d95900baad7f74" exitCode=0
Jan 31 14:56:07 crc kubenswrapper[4751]: I0131 14:56:07.893952 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590swhdb" event={"ID":"a525382f-29ee-4393-9e5b-1b3e989a1bc3","Type":"ContainerDied","Data":"f84e5f08594d3f72dc6ce544065026534e30bfc6f05c4074d6d95900baad7f74"}
Jan 31 14:56:08 crc kubenswrapper[4751]: I0131 14:56:08.902144 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9w6rf" event={"ID":"db9636cf-e895-422b-8064-ce6d652a85d1","Type":"ContainerStarted","Data":"e33a9dabd0f9169517526bac63b6ef7237b0e3ac0120a0efb07aeb141081eeab"}
Jan 31 14:56:08 crc kubenswrapper[4751]: I0131 14:56:08.923942 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9w6rf" podStartSLOduration=6.122838999 podStartE2EDuration="8.923923646s" podCreationTimestamp="2026-01-31 14:56:00 +0000 UTC" firstStartedPulling="2026-01-31 14:56:04.864267446 +0000 UTC m=+869.238980331" lastFinishedPulling="2026-01-31 14:56:07.665352053 +0000 UTC m=+872.040064978" observedRunningTime="2026-01-31 14:56:08.92253374 +0000 UTC m=+873.297246625" watchObservedRunningTime="2026-01-31 14:56:08.923923646 +0000 UTC m=+873.298636531"
Jan 31 14:56:09 crc kubenswrapper[4751]: I0131 14:56:09.207932 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590swhdb"
Jan 31 14:56:09 crc kubenswrapper[4751]: I0131 14:56:09.224031 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a525382f-29ee-4393-9e5b-1b3e989a1bc3-bundle\") pod \"a525382f-29ee-4393-9e5b-1b3e989a1bc3\" (UID: \"a525382f-29ee-4393-9e5b-1b3e989a1bc3\") "
Jan 31 14:56:09 crc kubenswrapper[4751]: I0131 14:56:09.224195 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a525382f-29ee-4393-9e5b-1b3e989a1bc3-util\") pod \"a525382f-29ee-4393-9e5b-1b3e989a1bc3\" (UID: \"a525382f-29ee-4393-9e5b-1b3e989a1bc3\") "
Jan 31 14:56:09 crc kubenswrapper[4751]: I0131 14:56:09.224249 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k9mlm\" (UniqueName: \"kubernetes.io/projected/a525382f-29ee-4393-9e5b-1b3e989a1bc3-kube-api-access-k9mlm\") pod \"a525382f-29ee-4393-9e5b-1b3e989a1bc3\" (UID: \"a525382f-29ee-4393-9e5b-1b3e989a1bc3\") "
Jan 31 14:56:09 crc kubenswrapper[4751]: I0131 14:56:09.224557 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a525382f-29ee-4393-9e5b-1b3e989a1bc3-bundle" (OuterVolumeSpecName: "bundle") pod "a525382f-29ee-4393-9e5b-1b3e989a1bc3" (UID: "a525382f-29ee-4393-9e5b-1b3e989a1bc3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 14:56:09 crc kubenswrapper[4751]: I0131 14:56:09.224745 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a525382f-29ee-4393-9e5b-1b3e989a1bc3-util" (OuterVolumeSpecName: "util") pod "a525382f-29ee-4393-9e5b-1b3e989a1bc3" (UID: "a525382f-29ee-4393-9e5b-1b3e989a1bc3"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 14:56:09 crc kubenswrapper[4751]: I0131 14:56:09.231334 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a525382f-29ee-4393-9e5b-1b3e989a1bc3-kube-api-access-k9mlm" (OuterVolumeSpecName: "kube-api-access-k9mlm") pod "a525382f-29ee-4393-9e5b-1b3e989a1bc3" (UID: "a525382f-29ee-4393-9e5b-1b3e989a1bc3"). InnerVolumeSpecName "kube-api-access-k9mlm". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 14:56:09 crc kubenswrapper[4751]: I0131 14:56:09.325879 4751 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/a525382f-29ee-4393-9e5b-1b3e989a1bc3-util\") on node \"crc\" DevicePath \"\""
Jan 31 14:56:09 crc kubenswrapper[4751]: I0131 14:56:09.325921 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k9mlm\" (UniqueName: \"kubernetes.io/projected/a525382f-29ee-4393-9e5b-1b3e989a1bc3-kube-api-access-k9mlm\") on node \"crc\" DevicePath \"\""
Jan 31 14:56:09 crc kubenswrapper[4751]: I0131 14:56:09.325938 4751 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/a525382f-29ee-4393-9e5b-1b3e989a1bc3-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 14:56:09 crc kubenswrapper[4751]: I0131 14:56:09.909219 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590swhdb" event={"ID":"a525382f-29ee-4393-9e5b-1b3e989a1bc3","Type":"ContainerDied","Data":"1f53cc0349cc95b3204177d9541764f06423422d60741c2505c0c3dc21ce5ef9"}
Jan 31 14:56:09 crc kubenswrapper[4751]: I0131 14:56:09.909264 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1f53cc0349cc95b3204177d9541764f06423422d60741c2505c0c3dc21ce5ef9"
Jan 31 14:56:09 crc kubenswrapper[4751]: I0131 14:56:09.909238 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590swhdb"
Jan 31 14:56:10 crc kubenswrapper[4751]: I0131 14:56:10.673052 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-9w6rf"
Jan 31 14:56:10 crc kubenswrapper[4751]: I0131 14:56:10.673378 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9w6rf"
Jan 31 14:56:11 crc kubenswrapper[4751]: I0131 14:56:11.735124 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-9w6rf" podUID="db9636cf-e895-422b-8064-ce6d652a85d1" containerName="registry-server" probeResult="failure" output=<
Jan 31 14:56:11 crc kubenswrapper[4751]: timeout: failed to connect service ":50051" within 1s
Jan 31 14:56:11 crc kubenswrapper[4751]: >
Jan 31 14:56:15 crc kubenswrapper[4751]: I0131 14:56:15.150855 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/openstack-galera-0"
Jan 31 14:56:15 crc kubenswrapper[4751]: I0131 14:56:15.240686 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/openstack-galera-0"
Jan 31 14:56:17 crc kubenswrapper[4751]: I0131 14:56:17.575611 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-fnbvg"]
Jan 31 14:56:17 crc kubenswrapper[4751]: E0131
14:56:17.576313 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6b6442aa-bbe5-4658-bfe9-28ee4f9b3daf" containerName="mariadb-account-create-update" Jan 31 14:56:17 crc kubenswrapper[4751]: I0131 14:56:17.576335 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b6442aa-bbe5-4658-bfe9-28ee4f9b3daf" containerName="mariadb-account-create-update" Jan 31 14:56:17 crc kubenswrapper[4751]: E0131 14:56:17.576349 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a525382f-29ee-4393-9e5b-1b3e989a1bc3" containerName="extract" Jan 31 14:56:17 crc kubenswrapper[4751]: I0131 14:56:17.576360 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a525382f-29ee-4393-9e5b-1b3e989a1bc3" containerName="extract" Jan 31 14:56:17 crc kubenswrapper[4751]: E0131 14:56:17.576385 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a525382f-29ee-4393-9e5b-1b3e989a1bc3" containerName="pull" Jan 31 14:56:17 crc kubenswrapper[4751]: I0131 14:56:17.576395 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a525382f-29ee-4393-9e5b-1b3e989a1bc3" containerName="pull" Jan 31 14:56:17 crc kubenswrapper[4751]: E0131 14:56:17.576419 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a525382f-29ee-4393-9e5b-1b3e989a1bc3" containerName="util" Jan 31 14:56:17 crc kubenswrapper[4751]: I0131 14:56:17.576432 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a525382f-29ee-4393-9e5b-1b3e989a1bc3" containerName="util" Jan 31 14:56:17 crc kubenswrapper[4751]: I0131 14:56:17.576630 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a525382f-29ee-4393-9e5b-1b3e989a1bc3" containerName="extract" Jan 31 14:56:17 crc kubenswrapper[4751]: I0131 14:56:17.576646 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="6b6442aa-bbe5-4658-bfe9-28ee4f9b3daf" containerName="mariadb-account-create-update" Jan 31 14:56:17 crc kubenswrapper[4751]: I0131 14:56:17.577312 4751 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-fnbvg" Jan 31 14:56:17 crc kubenswrapper[4751]: I0131 14:56:17.579103 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"rabbitmq-cluster-operator-dockercfg-b8cw4" Jan 31 14:56:17 crc kubenswrapper[4751]: I0131 14:56:17.593405 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-fnbvg"] Jan 31 14:56:17 crc kubenswrapper[4751]: I0131 14:56:17.736531 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x5bkt\" (UniqueName: \"kubernetes.io/projected/3b77f113-f8c0-47b8-ad79-d1be38bf6e09-kube-api-access-x5bkt\") pod \"rabbitmq-cluster-operator-779fc9694b-fnbvg\" (UID: \"3b77f113-f8c0-47b8-ad79-d1be38bf6e09\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-fnbvg" Jan 31 14:56:17 crc kubenswrapper[4751]: I0131 14:56:17.838278 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x5bkt\" (UniqueName: \"kubernetes.io/projected/3b77f113-f8c0-47b8-ad79-d1be38bf6e09-kube-api-access-x5bkt\") pod \"rabbitmq-cluster-operator-779fc9694b-fnbvg\" (UID: \"3b77f113-f8c0-47b8-ad79-d1be38bf6e09\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-fnbvg" Jan 31 14:56:17 crc kubenswrapper[4751]: I0131 14:56:17.869967 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x5bkt\" (UniqueName: \"kubernetes.io/projected/3b77f113-f8c0-47b8-ad79-d1be38bf6e09-kube-api-access-x5bkt\") pod \"rabbitmq-cluster-operator-779fc9694b-fnbvg\" (UID: \"3b77f113-f8c0-47b8-ad79-d1be38bf6e09\") " pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-fnbvg" Jan 31 14:56:17 crc kubenswrapper[4751]: I0131 14:56:17.905443 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-fnbvg" Jan 31 14:56:18 crc kubenswrapper[4751]: I0131 14:56:18.421998 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-fnbvg"] Jan 31 14:56:18 crc kubenswrapper[4751]: I0131 14:56:18.503921 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/openstack-galera-1" Jan 31 14:56:18 crc kubenswrapper[4751]: I0131 14:56:18.584792 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/openstack-galera-1" Jan 31 14:56:18 crc kubenswrapper[4751]: I0131 14:56:18.980239 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-fnbvg" event={"ID":"3b77f113-f8c0-47b8-ad79-d1be38bf6e09","Type":"ContainerStarted","Data":"9417f04e17815ef9de6ec5d2357c85d9f600b65c7a818fc63c494820d893f560"} Jan 31 14:56:20 crc kubenswrapper[4751]: I0131 14:56:20.740216 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9w6rf" Jan 31 14:56:20 crc kubenswrapper[4751]: I0131 14:56:20.797224 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9w6rf" Jan 31 14:56:22 crc kubenswrapper[4751]: I0131 14:56:22.014313 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-fnbvg" event={"ID":"3b77f113-f8c0-47b8-ad79-d1be38bf6e09","Type":"ContainerStarted","Data":"669ca17a7f872d19f207a1db8b3eeef3d4392ee464f55983153e6452773c111f"} Jan 31 14:56:22 crc kubenswrapper[4751]: I0131 14:56:22.047511 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-fnbvg" podStartSLOduration=1.7411855950000001 podStartE2EDuration="5.047473754s" 
podCreationTimestamp="2026-01-31 14:56:17 +0000 UTC" firstStartedPulling="2026-01-31 14:56:18.435774268 +0000 UTC m=+882.810487153" lastFinishedPulling="2026-01-31 14:56:21.742062407 +0000 UTC m=+886.116775312" observedRunningTime="2026-01-31 14:56:22.036791303 +0000 UTC m=+886.411504228" watchObservedRunningTime="2026-01-31 14:56:22.047473754 +0000 UTC m=+886.422186679" Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.003876 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/rabbitmq-server-0"] Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.004998 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.007669 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"rabbitmq-server-conf" Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.008041 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"rabbitmq-server-dockercfg-8vh75" Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.008217 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"rabbitmq-default-user" Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.008352 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"rabbitmq-erlang-cookie" Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.008709 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"rabbitmq-plugins-conf" Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.015175 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/rabbitmq-server-0"] Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.148827 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"erlang-cookie-secret\" (UniqueName: 
\"kubernetes.io/secret/19317a08-b18b-42c9-bdc9-394e1e06257d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"19317a08-b18b-42c9-bdc9-394e1e06257d\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.149039 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/19317a08-b18b-42c9-bdc9-394e1e06257d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"19317a08-b18b-42c9-bdc9-394e1e06257d\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.149300 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/19317a08-b18b-42c9-bdc9-394e1e06257d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"19317a08-b18b-42c9-bdc9-394e1e06257d\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.153042 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-f145c232-830a-4841-bd1f-7c42e25cd443\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f145c232-830a-4841-bd1f-7c42e25cd443\") pod \"rabbitmq-server-0\" (UID: \"19317a08-b18b-42c9-bdc9-394e1e06257d\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.153160 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/19317a08-b18b-42c9-bdc9-394e1e06257d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"19317a08-b18b-42c9-bdc9-394e1e06257d\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.153310 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-zrbjl\" (UniqueName: \"kubernetes.io/projected/19317a08-b18b-42c9-bdc9-394e1e06257d-kube-api-access-zrbjl\") pod \"rabbitmq-server-0\" (UID: \"19317a08-b18b-42c9-bdc9-394e1e06257d\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.153370 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/19317a08-b18b-42c9-bdc9-394e1e06257d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"19317a08-b18b-42c9-bdc9-394e1e06257d\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.153488 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/19317a08-b18b-42c9-bdc9-394e1e06257d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"19317a08-b18b-42c9-bdc9-394e1e06257d\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.254602 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/19317a08-b18b-42c9-bdc9-394e1e06257d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"19317a08-b18b-42c9-bdc9-394e1e06257d\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.254679 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-f145c232-830a-4841-bd1f-7c42e25cd443\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f145c232-830a-4841-bd1f-7c42e25cd443\") pod \"rabbitmq-server-0\" (UID: \"19317a08-b18b-42c9-bdc9-394e1e06257d\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.255357 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"pod-info\" (UniqueName: \"kubernetes.io/downward-api/19317a08-b18b-42c9-bdc9-394e1e06257d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"19317a08-b18b-42c9-bdc9-394e1e06257d\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.255433 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/19317a08-b18b-42c9-bdc9-394e1e06257d-rabbitmq-erlang-cookie\") pod \"rabbitmq-server-0\" (UID: \"19317a08-b18b-42c9-bdc9-394e1e06257d\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.255466 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zrbjl\" (UniqueName: \"kubernetes.io/projected/19317a08-b18b-42c9-bdc9-394e1e06257d-kube-api-access-zrbjl\") pod \"rabbitmq-server-0\" (UID: \"19317a08-b18b-42c9-bdc9-394e1e06257d\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.255917 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/19317a08-b18b-42c9-bdc9-394e1e06257d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"19317a08-b18b-42c9-bdc9-394e1e06257d\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.256035 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/19317a08-b18b-42c9-bdc9-394e1e06257d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"19317a08-b18b-42c9-bdc9-394e1e06257d\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.256536 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-plugins\" (UniqueName: 
\"kubernetes.io/empty-dir/19317a08-b18b-42c9-bdc9-394e1e06257d-rabbitmq-plugins\") pod \"rabbitmq-server-0\" (UID: \"19317a08-b18b-42c9-bdc9-394e1e06257d\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.257056 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/19317a08-b18b-42c9-bdc9-394e1e06257d-plugins-conf\") pod \"rabbitmq-server-0\" (UID: \"19317a08-b18b-42c9-bdc9-394e1e06257d\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.257188 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/19317a08-b18b-42c9-bdc9-394e1e06257d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"19317a08-b18b-42c9-bdc9-394e1e06257d\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.257241 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/19317a08-b18b-42c9-bdc9-394e1e06257d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"19317a08-b18b-42c9-bdc9-394e1e06257d\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.258257 4751 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.258305 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-f145c232-830a-4841-bd1f-7c42e25cd443\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f145c232-830a-4841-bd1f-7c42e25cd443\") pod \"rabbitmq-server-0\" (UID: \"19317a08-b18b-42c9-bdc9-394e1e06257d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/477b9b8ecff0fd3c5085f9312279f3fdf5646254d348b830ad73ff5a0f99fc7f/globalmount\"" pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.262963 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/19317a08-b18b-42c9-bdc9-394e1e06257d-pod-info\") pod \"rabbitmq-server-0\" (UID: \"19317a08-b18b-42c9-bdc9-394e1e06257d\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.265654 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/19317a08-b18b-42c9-bdc9-394e1e06257d-erlang-cookie-secret\") pod \"rabbitmq-server-0\" (UID: \"19317a08-b18b-42c9-bdc9-394e1e06257d\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.278428 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/19317a08-b18b-42c9-bdc9-394e1e06257d-rabbitmq-confd\") pod \"rabbitmq-server-0\" (UID: \"19317a08-b18b-42c9-bdc9-394e1e06257d\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.284132 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zrbjl\" (UniqueName: \"kubernetes.io/projected/19317a08-b18b-42c9-bdc9-394e1e06257d-kube-api-access-zrbjl\") pod \"rabbitmq-server-0\" (UID: 
\"19317a08-b18b-42c9-bdc9-394e1e06257d\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.294979 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-f145c232-830a-4841-bd1f-7c42e25cd443\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f145c232-830a-4841-bd1f-7c42e25cd443\") pod \"rabbitmq-server-0\" (UID: \"19317a08-b18b-42c9-bdc9-394e1e06257d\") " pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.328098 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.646788 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/rabbitmq-server-0"] Jan 31 14:56:24 crc kubenswrapper[4751]: W0131 14:56:24.656211 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod19317a08_b18b_42c9_bdc9_394e1e06257d.slice/crio-f6c134f960dca8717c0eb288c9e0a54cef2dc5968f5f68b04744d850b9ec573e WatchSource:0}: Error finding container f6c134f960dca8717c0eb288c9e0a54cef2dc5968f5f68b04744d850b9ec573e: Status 404 returned error can't find the container with id f6c134f960dca8717c0eb288c9e0a54cef2dc5968f5f68b04744d850b9ec573e Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.709535 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9w6rf"] Jan 31 14:56:24 crc kubenswrapper[4751]: I0131 14:56:24.709751 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9w6rf" podUID="db9636cf-e895-422b-8064-ce6d652a85d1" containerName="registry-server" containerID="cri-o://e33a9dabd0f9169517526bac63b6ef7237b0e3ac0120a0efb07aeb141081eeab" gracePeriod=2 Jan 31 14:56:25 crc kubenswrapper[4751]: I0131 14:56:25.057140 4751 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/rabbitmq-server-0" event={"ID":"19317a08-b18b-42c9-bdc9-394e1e06257d","Type":"ContainerStarted","Data":"f6c134f960dca8717c0eb288c9e0a54cef2dc5968f5f68b04744d850b9ec573e"} Jan 31 14:56:25 crc kubenswrapper[4751]: I0131 14:56:25.061447 4751 generic.go:334] "Generic (PLEG): container finished" podID="db9636cf-e895-422b-8064-ce6d652a85d1" containerID="e33a9dabd0f9169517526bac63b6ef7237b0e3ac0120a0efb07aeb141081eeab" exitCode=0 Jan 31 14:56:25 crc kubenswrapper[4751]: I0131 14:56:25.061495 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9w6rf" event={"ID":"db9636cf-e895-422b-8064-ce6d652a85d1","Type":"ContainerDied","Data":"e33a9dabd0f9169517526bac63b6ef7237b0e3ac0120a0efb07aeb141081eeab"} Jan 31 14:56:25 crc kubenswrapper[4751]: I0131 14:56:25.654211 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9w6rf" Jan 31 14:56:25 crc kubenswrapper[4751]: I0131 14:56:25.679954 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db9636cf-e895-422b-8064-ce6d652a85d1-catalog-content\") pod \"db9636cf-e895-422b-8064-ce6d652a85d1\" (UID: \"db9636cf-e895-422b-8064-ce6d652a85d1\") " Jan 31 14:56:25 crc kubenswrapper[4751]: I0131 14:56:25.680047 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db9636cf-e895-422b-8064-ce6d652a85d1-utilities\") pod \"db9636cf-e895-422b-8064-ce6d652a85d1\" (UID: \"db9636cf-e895-422b-8064-ce6d652a85d1\") " Jan 31 14:56:25 crc kubenswrapper[4751]: I0131 14:56:25.680088 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w2b4z\" (UniqueName: \"kubernetes.io/projected/db9636cf-e895-422b-8064-ce6d652a85d1-kube-api-access-w2b4z\") pod 
\"db9636cf-e895-422b-8064-ce6d652a85d1\" (UID: \"db9636cf-e895-422b-8064-ce6d652a85d1\") " Jan 31 14:56:25 crc kubenswrapper[4751]: I0131 14:56:25.681206 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db9636cf-e895-422b-8064-ce6d652a85d1-utilities" (OuterVolumeSpecName: "utilities") pod "db9636cf-e895-422b-8064-ce6d652a85d1" (UID: "db9636cf-e895-422b-8064-ce6d652a85d1"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:56:25 crc kubenswrapper[4751]: I0131 14:56:25.685702 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/db9636cf-e895-422b-8064-ce6d652a85d1-kube-api-access-w2b4z" (OuterVolumeSpecName: "kube-api-access-w2b4z") pod "db9636cf-e895-422b-8064-ce6d652a85d1" (UID: "db9636cf-e895-422b-8064-ce6d652a85d1"). InnerVolumeSpecName "kube-api-access-w2b4z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:56:25 crc kubenswrapper[4751]: I0131 14:56:25.781653 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/db9636cf-e895-422b-8064-ce6d652a85d1-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 14:56:25 crc kubenswrapper[4751]: I0131 14:56:25.781952 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w2b4z\" (UniqueName: \"kubernetes.io/projected/db9636cf-e895-422b-8064-ce6d652a85d1-kube-api-access-w2b4z\") on node \"crc\" DevicePath \"\"" Jan 31 14:56:25 crc kubenswrapper[4751]: I0131 14:56:25.819789 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/db9636cf-e895-422b-8064-ce6d652a85d1-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "db9636cf-e895-422b-8064-ce6d652a85d1" (UID: "db9636cf-e895-422b-8064-ce6d652a85d1"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:56:25 crc kubenswrapper[4751]: I0131 14:56:25.883408 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/db9636cf-e895-422b-8064-ce6d652a85d1-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 14:56:26 crc kubenswrapper[4751]: I0131 14:56:26.071632 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9w6rf" event={"ID":"db9636cf-e895-422b-8064-ce6d652a85d1","Type":"ContainerDied","Data":"dc0b582d511f1669c6b4a4462f76d30ea3f273270449a3ea86a4620ceb3d1f30"} Jan 31 14:56:26 crc kubenswrapper[4751]: I0131 14:56:26.071719 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9w6rf" Jan 31 14:56:26 crc kubenswrapper[4751]: I0131 14:56:26.072104 4751 scope.go:117] "RemoveContainer" containerID="e33a9dabd0f9169517526bac63b6ef7237b0e3ac0120a0efb07aeb141081eeab" Jan 31 14:56:26 crc kubenswrapper[4751]: I0131 14:56:26.099689 4751 scope.go:117] "RemoveContainer" containerID="72edda77bd270bc034f43aee7ab36fd6299606986815e322ded2d2de14aa053b" Jan 31 14:56:26 crc kubenswrapper[4751]: I0131 14:56:26.102050 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9w6rf"] Jan 31 14:56:26 crc kubenswrapper[4751]: I0131 14:56:26.108478 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9w6rf"] Jan 31 14:56:26 crc kubenswrapper[4751]: I0131 14:56:26.128058 4751 scope.go:117] "RemoveContainer" containerID="526b390ea0ece0e6fceffb4c1922e3d8cdf235036daf439a7f234526b708d9ea" Jan 31 14:56:26 crc kubenswrapper[4751]: I0131 14:56:26.413256 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="db9636cf-e895-422b-8064-ce6d652a85d1" path="/var/lib/kubelet/pods/db9636cf-e895-422b-8064-ce6d652a85d1/volumes" Jan 31 14:56:26 crc 
kubenswrapper[4751]: I0131 14:56:26.743654 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/keystone-operator-index-6bwnv"]
Jan 31 14:56:26 crc kubenswrapper[4751]: E0131 14:56:26.745032 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db9636cf-e895-422b-8064-ce6d652a85d1" containerName="registry-server"
Jan 31 14:56:26 crc kubenswrapper[4751]: I0131 14:56:26.745111 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="db9636cf-e895-422b-8064-ce6d652a85d1" containerName="registry-server"
Jan 31 14:56:26 crc kubenswrapper[4751]: E0131 14:56:26.745161 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db9636cf-e895-422b-8064-ce6d652a85d1" containerName="extract-utilities"
Jan 31 14:56:26 crc kubenswrapper[4751]: I0131 14:56:26.745184 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="db9636cf-e895-422b-8064-ce6d652a85d1" containerName="extract-utilities"
Jan 31 14:56:26 crc kubenswrapper[4751]: E0131 14:56:26.745236 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="db9636cf-e895-422b-8064-ce6d652a85d1" containerName="extract-content"
Jan 31 14:56:26 crc kubenswrapper[4751]: I0131 14:56:26.745255 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="db9636cf-e895-422b-8064-ce6d652a85d1" containerName="extract-content"
Jan 31 14:56:26 crc kubenswrapper[4751]: I0131 14:56:26.746220 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="db9636cf-e895-422b-8064-ce6d652a85d1" containerName="registry-server"
Jan 31 14:56:26 crc kubenswrapper[4751]: I0131 14:56:26.748439 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-6bwnv"
Jan 31 14:56:26 crc kubenswrapper[4751]: I0131 14:56:26.753642 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-index-dockercfg-x7dlp"
Jan 31 14:56:26 crc kubenswrapper[4751]: I0131 14:56:26.776123 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-6bwnv"]
Jan 31 14:56:26 crc kubenswrapper[4751]: I0131 14:56:26.796139 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-msbp2\" (UniqueName: \"kubernetes.io/projected/08530f42-16c5-4253-a623-2a032aeb95a7-kube-api-access-msbp2\") pod \"keystone-operator-index-6bwnv\" (UID: \"08530f42-16c5-4253-a623-2a032aeb95a7\") " pod="openstack-operators/keystone-operator-index-6bwnv"
Jan 31 14:56:26 crc kubenswrapper[4751]: I0131 14:56:26.897294 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-msbp2\" (UniqueName: \"kubernetes.io/projected/08530f42-16c5-4253-a623-2a032aeb95a7-kube-api-access-msbp2\") pod \"keystone-operator-index-6bwnv\" (UID: \"08530f42-16c5-4253-a623-2a032aeb95a7\") " pod="openstack-operators/keystone-operator-index-6bwnv"
Jan 31 14:56:26 crc kubenswrapper[4751]: I0131 14:56:26.915516 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-msbp2\" (UniqueName: \"kubernetes.io/projected/08530f42-16c5-4253-a623-2a032aeb95a7-kube-api-access-msbp2\") pod \"keystone-operator-index-6bwnv\" (UID: \"08530f42-16c5-4253-a623-2a032aeb95a7\") " pod="openstack-operators/keystone-operator-index-6bwnv"
Jan 31 14:56:27 crc kubenswrapper[4751]: I0131 14:56:27.074008 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-6bwnv"
Jan 31 14:56:27 crc kubenswrapper[4751]: I0131 14:56:27.503229 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-index-6bwnv"]
Jan 31 14:56:28 crc kubenswrapper[4751]: I0131 14:56:28.091546 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-6bwnv" event={"ID":"08530f42-16c5-4253-a623-2a032aeb95a7","Type":"ContainerStarted","Data":"287dcdc51fb3cdd7484c91633318b58c60ad6d8b753d031c65761b77a7b8670b"}
Jan 31 14:56:30 crc kubenswrapper[4751]: I0131 14:56:30.110487 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-6bwnv" event={"ID":"08530f42-16c5-4253-a623-2a032aeb95a7","Type":"ContainerStarted","Data":"3035639c5750cf779b9b57b5d0ade23abfc3c28de57f8e43e074a91f02a62e68"}
Jan 31 14:56:30 crc kubenswrapper[4751]: I0131 14:56:30.132366 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-index-6bwnv" podStartSLOduration=3.17874397 podStartE2EDuration="4.132337466s" podCreationTimestamp="2026-01-31 14:56:26 +0000 UTC" firstStartedPulling="2026-01-31 14:56:27.52521991 +0000 UTC m=+891.899932835" lastFinishedPulling="2026-01-31 14:56:28.478813426 +0000 UTC m=+892.853526331" observedRunningTime="2026-01-31 14:56:30.123470623 +0000 UTC m=+894.498183518" watchObservedRunningTime="2026-01-31 14:56:30.132337466 +0000 UTC m=+894.507050381"
Jan 31 14:56:35 crc kubenswrapper[4751]: I0131 14:56:35.144977 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/rabbitmq-server-0" event={"ID":"19317a08-b18b-42c9-bdc9-394e1e06257d","Type":"ContainerStarted","Data":"505748b1e10e777b66b173a6705d54ff333de5b60e6cd125a1cf81bd7167e586"}
Jan 31 14:56:36 crc kubenswrapper[4751]: I0131 14:56:36.927831 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-g8hgc"]
Jan 31 14:56:36 crc kubenswrapper[4751]: I0131 14:56:36.930289 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g8hgc"
Jan 31 14:56:36 crc kubenswrapper[4751]: I0131 14:56:36.951177 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g8hgc"]
Jan 31 14:56:37 crc kubenswrapper[4751]: I0131 14:56:37.048815 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb6fb532-641c-459e-bb99-ba0f9779510c-utilities\") pod \"redhat-marketplace-g8hgc\" (UID: \"eb6fb532-641c-459e-bb99-ba0f9779510c\") " pod="openshift-marketplace/redhat-marketplace-g8hgc"
Jan 31 14:56:37 crc kubenswrapper[4751]: I0131 14:56:37.048886 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjhf4\" (UniqueName: \"kubernetes.io/projected/eb6fb532-641c-459e-bb99-ba0f9779510c-kube-api-access-pjhf4\") pod \"redhat-marketplace-g8hgc\" (UID: \"eb6fb532-641c-459e-bb99-ba0f9779510c\") " pod="openshift-marketplace/redhat-marketplace-g8hgc"
Jan 31 14:56:37 crc kubenswrapper[4751]: I0131 14:56:37.048912 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb6fb532-641c-459e-bb99-ba0f9779510c-catalog-content\") pod \"redhat-marketplace-g8hgc\" (UID: \"eb6fb532-641c-459e-bb99-ba0f9779510c\") " pod="openshift-marketplace/redhat-marketplace-g8hgc"
Jan 31 14:56:37 crc kubenswrapper[4751]: I0131 14:56:37.074572 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/keystone-operator-index-6bwnv"
Jan 31 14:56:37 crc kubenswrapper[4751]: I0131 14:56:37.074604 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-index-6bwnv"
Jan 31 14:56:37 crc kubenswrapper[4751]: I0131 14:56:37.099301 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/keystone-operator-index-6bwnv"
Jan 31 14:56:37 crc kubenswrapper[4751]: I0131 14:56:37.150486 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb6fb532-641c-459e-bb99-ba0f9779510c-utilities\") pod \"redhat-marketplace-g8hgc\" (UID: \"eb6fb532-641c-459e-bb99-ba0f9779510c\") " pod="openshift-marketplace/redhat-marketplace-g8hgc"
Jan 31 14:56:37 crc kubenswrapper[4751]: I0131 14:56:37.150594 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjhf4\" (UniqueName: \"kubernetes.io/projected/eb6fb532-641c-459e-bb99-ba0f9779510c-kube-api-access-pjhf4\") pod \"redhat-marketplace-g8hgc\" (UID: \"eb6fb532-641c-459e-bb99-ba0f9779510c\") " pod="openshift-marketplace/redhat-marketplace-g8hgc"
Jan 31 14:56:37 crc kubenswrapper[4751]: I0131 14:56:37.150630 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb6fb532-641c-459e-bb99-ba0f9779510c-catalog-content\") pod \"redhat-marketplace-g8hgc\" (UID: \"eb6fb532-641c-459e-bb99-ba0f9779510c\") " pod="openshift-marketplace/redhat-marketplace-g8hgc"
Jan 31 14:56:37 crc kubenswrapper[4751]: I0131 14:56:37.150897 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb6fb532-641c-459e-bb99-ba0f9779510c-utilities\") pod \"redhat-marketplace-g8hgc\" (UID: \"eb6fb532-641c-459e-bb99-ba0f9779510c\") " pod="openshift-marketplace/redhat-marketplace-g8hgc"
Jan 31 14:56:37 crc kubenswrapper[4751]: I0131 14:56:37.151044 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb6fb532-641c-459e-bb99-ba0f9779510c-catalog-content\") pod \"redhat-marketplace-g8hgc\" (UID: \"eb6fb532-641c-459e-bb99-ba0f9779510c\") " pod="openshift-marketplace/redhat-marketplace-g8hgc"
Jan 31 14:56:37 crc kubenswrapper[4751]: I0131 14:56:37.172786 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjhf4\" (UniqueName: \"kubernetes.io/projected/eb6fb532-641c-459e-bb99-ba0f9779510c-kube-api-access-pjhf4\") pod \"redhat-marketplace-g8hgc\" (UID: \"eb6fb532-641c-459e-bb99-ba0f9779510c\") " pod="openshift-marketplace/redhat-marketplace-g8hgc"
Jan 31 14:56:37 crc kubenswrapper[4751]: I0131 14:56:37.197804 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-index-6bwnv"
Jan 31 14:56:37 crc kubenswrapper[4751]: I0131 14:56:37.262677 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g8hgc"
Jan 31 14:56:37 crc kubenswrapper[4751]: I0131 14:56:37.711413 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-g8hgc"]
Jan 31 14:56:37 crc kubenswrapper[4751]: W0131 14:56:37.719571 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb6fb532_641c_459e_bb99_ba0f9779510c.slice/crio-b08d3a83c80ca64edae98efdfdbc3ea7e3985a0df616ed7d528a097908ab573a WatchSource:0}: Error finding container b08d3a83c80ca64edae98efdfdbc3ea7e3985a0df616ed7d528a097908ab573a: Status 404 returned error can't find the container with id b08d3a83c80ca64edae98efdfdbc3ea7e3985a0df616ed7d528a097908ab573a
Jan 31 14:56:38 crc kubenswrapper[4751]: I0131 14:56:38.171731 4751 generic.go:334] "Generic (PLEG): container finished" podID="eb6fb532-641c-459e-bb99-ba0f9779510c" containerID="f76a9ace8d656c05224f8d7e9efd34944bad63abdbcc558bcbf72f5ac86facb5" exitCode=0
Jan 31 14:56:38 crc kubenswrapper[4751]: I0131 14:56:38.172983 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8hgc" event={"ID":"eb6fb532-641c-459e-bb99-ba0f9779510c","Type":"ContainerDied","Data":"f76a9ace8d656c05224f8d7e9efd34944bad63abdbcc558bcbf72f5ac86facb5"}
Jan 31 14:56:38 crc kubenswrapper[4751]: I0131 14:56:38.173011 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8hgc" event={"ID":"eb6fb532-641c-459e-bb99-ba0f9779510c","Type":"ContainerStarted","Data":"b08d3a83c80ca64edae98efdfdbc3ea7e3985a0df616ed7d528a097908ab573a"}
Jan 31 14:56:39 crc kubenswrapper[4751]: I0131 14:56:39.180658 4751 generic.go:334] "Generic (PLEG): container finished" podID="eb6fb532-641c-459e-bb99-ba0f9779510c" containerID="573923f23b16312c13dc5ab0d41de7478d1a0111af903d7fde9c795304380992" exitCode=0
Jan 31 14:56:39 crc kubenswrapper[4751]: I0131 14:56:39.180892 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8hgc" event={"ID":"eb6fb532-641c-459e-bb99-ba0f9779510c","Type":"ContainerDied","Data":"573923f23b16312c13dc5ab0d41de7478d1a0111af903d7fde9c795304380992"}
Jan 31 14:56:40 crc kubenswrapper[4751]: I0131 14:56:40.191509 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8hgc" event={"ID":"eb6fb532-641c-459e-bb99-ba0f9779510c","Type":"ContainerStarted","Data":"91f8565e18d5814616edf1f9308b1748710df7aa51f507a293df07392a1fe336"}
Jan 31 14:56:40 crc kubenswrapper[4751]: I0131 14:56:40.214149 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-g8hgc" podStartSLOduration=2.588494662 podStartE2EDuration="4.214128406s" podCreationTimestamp="2026-01-31 14:56:36 +0000 UTC" firstStartedPulling="2026-01-31 14:56:38.173655881 +0000 UTC m=+902.548368766" lastFinishedPulling="2026-01-31 14:56:39.799289625 +0000 UTC m=+904.174002510" observedRunningTime="2026-01-31 14:56:40.208736964 +0000 UTC m=+904.583449869" watchObservedRunningTime="2026-01-31 14:56:40.214128406 +0000 UTC m=+904.588841301"
Jan 31 14:56:41 crc kubenswrapper[4751]: I0131 14:56:41.967983 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4ef6sfsr"]
Jan 31 14:56:41 crc kubenswrapper[4751]: I0131 14:56:41.970194 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4ef6sfsr"
Jan 31 14:56:41 crc kubenswrapper[4751]: I0131 14:56:41.976008 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-wxkjx"
Jan 31 14:56:41 crc kubenswrapper[4751]: I0131 14:56:41.979658 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4ef6sfsr"]
Jan 31 14:56:42 crc kubenswrapper[4751]: I0131 14:56:42.114457 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/772cd794-fe9a-4ac3-8df8-e7f29edb85bf-bundle\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4ef6sfsr\" (UID: \"772cd794-fe9a-4ac3-8df8-e7f29edb85bf\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4ef6sfsr"
Jan 31 14:56:42 crc kubenswrapper[4751]: I0131 14:56:42.114521 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/772cd794-fe9a-4ac3-8df8-e7f29edb85bf-util\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4ef6sfsr\" (UID: \"772cd794-fe9a-4ac3-8df8-e7f29edb85bf\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4ef6sfsr"
Jan 31 14:56:42 crc kubenswrapper[4751]: I0131 14:56:42.114578 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pss5r\" (UniqueName: \"kubernetes.io/projected/772cd794-fe9a-4ac3-8df8-e7f29edb85bf-kube-api-access-pss5r\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4ef6sfsr\" (UID: \"772cd794-fe9a-4ac3-8df8-e7f29edb85bf\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4ef6sfsr"
Jan 31 14:56:42 crc kubenswrapper[4751]: I0131 14:56:42.216697 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/772cd794-fe9a-4ac3-8df8-e7f29edb85bf-bundle\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4ef6sfsr\" (UID: \"772cd794-fe9a-4ac3-8df8-e7f29edb85bf\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4ef6sfsr"
Jan 31 14:56:42 crc kubenswrapper[4751]: I0131 14:56:42.216836 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/772cd794-fe9a-4ac3-8df8-e7f29edb85bf-util\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4ef6sfsr\" (UID: \"772cd794-fe9a-4ac3-8df8-e7f29edb85bf\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4ef6sfsr"
Jan 31 14:56:42 crc kubenswrapper[4751]: I0131 14:56:42.217006 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pss5r\" (UniqueName: \"kubernetes.io/projected/772cd794-fe9a-4ac3-8df8-e7f29edb85bf-kube-api-access-pss5r\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4ef6sfsr\" (UID: \"772cd794-fe9a-4ac3-8df8-e7f29edb85bf\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4ef6sfsr"
Jan 31 14:56:42 crc kubenswrapper[4751]: I0131 14:56:42.218030 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/772cd794-fe9a-4ac3-8df8-e7f29edb85bf-bundle\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4ef6sfsr\" (UID: \"772cd794-fe9a-4ac3-8df8-e7f29edb85bf\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4ef6sfsr"
Jan 31 14:56:42 crc kubenswrapper[4751]: I0131 14:56:42.218322 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/772cd794-fe9a-4ac3-8df8-e7f29edb85bf-util\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4ef6sfsr\" (UID: \"772cd794-fe9a-4ac3-8df8-e7f29edb85bf\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4ef6sfsr"
Jan 31 14:56:42 crc kubenswrapper[4751]: I0131 14:56:42.238306 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pss5r\" (UniqueName: \"kubernetes.io/projected/772cd794-fe9a-4ac3-8df8-e7f29edb85bf-kube-api-access-pss5r\") pod \"b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4ef6sfsr\" (UID: \"772cd794-fe9a-4ac3-8df8-e7f29edb85bf\") " pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4ef6sfsr"
Jan 31 14:56:42 crc kubenswrapper[4751]: I0131 14:56:42.301940 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4ef6sfsr"
Jan 31 14:56:42 crc kubenswrapper[4751]: I0131 14:56:42.778974 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4ef6sfsr"]
Jan 31 14:56:42 crc kubenswrapper[4751]: W0131 14:56:42.792214 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod772cd794_fe9a_4ac3_8df8_e7f29edb85bf.slice/crio-2fb806f8b06eb6a79ed37ad373eb1b349cbf24edb359d55111d6e03a3fda0ba6 WatchSource:0}: Error finding container 2fb806f8b06eb6a79ed37ad373eb1b349cbf24edb359d55111d6e03a3fda0ba6: Status 404 returned error can't find the container with id 2fb806f8b06eb6a79ed37ad373eb1b349cbf24edb359d55111d6e03a3fda0ba6
Jan 31 14:56:43 crc kubenswrapper[4751]: I0131 14:56:43.213652 4751 generic.go:334] "Generic (PLEG): container finished" podID="772cd794-fe9a-4ac3-8df8-e7f29edb85bf" containerID="8f2f8355ecce67c5c0aa186fe2a2c3a5d75143a19a9cc7d982cad7e44dc2d94f" exitCode=0
Jan 31 14:56:43 crc kubenswrapper[4751]: I0131 14:56:43.213863 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4ef6sfsr" event={"ID":"772cd794-fe9a-4ac3-8df8-e7f29edb85bf","Type":"ContainerDied","Data":"8f2f8355ecce67c5c0aa186fe2a2c3a5d75143a19a9cc7d982cad7e44dc2d94f"}
Jan 31 14:56:43 crc kubenswrapper[4751]: I0131 14:56:43.213928 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4ef6sfsr" event={"ID":"772cd794-fe9a-4ac3-8df8-e7f29edb85bf","Type":"ContainerStarted","Data":"2fb806f8b06eb6a79ed37ad373eb1b349cbf24edb359d55111d6e03a3fda0ba6"}
Jan 31 14:56:44 crc kubenswrapper[4751]: I0131 14:56:44.221176 4751 generic.go:334] "Generic (PLEG): container finished" podID="772cd794-fe9a-4ac3-8df8-e7f29edb85bf" containerID="24848de7678f7cd58f76b4f47400dce420906e54dfe8d1ef4c220211c4bbb57e" exitCode=0
Jan 31 14:56:44 crc kubenswrapper[4751]: I0131 14:56:44.221295 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4ef6sfsr" event={"ID":"772cd794-fe9a-4ac3-8df8-e7f29edb85bf","Type":"ContainerDied","Data":"24848de7678f7cd58f76b4f47400dce420906e54dfe8d1ef4c220211c4bbb57e"}
Jan 31 14:56:45 crc kubenswrapper[4751]: I0131 14:56:45.231363 4751 generic.go:334] "Generic (PLEG): container finished" podID="772cd794-fe9a-4ac3-8df8-e7f29edb85bf" containerID="8784247046f02ab2d8c0a52ce8233e64d23a7cd286c98e45a4c36115e6daf6d3" exitCode=0
Jan 31 14:56:45 crc kubenswrapper[4751]: I0131 14:56:45.231407 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4ef6sfsr" event={"ID":"772cd794-fe9a-4ac3-8df8-e7f29edb85bf","Type":"ContainerDied","Data":"8784247046f02ab2d8c0a52ce8233e64d23a7cd286c98e45a4c36115e6daf6d3"}
Jan 31 14:56:46 crc kubenswrapper[4751]: I0131 14:56:46.519894 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4ef6sfsr"
Jan 31 14:56:46 crc kubenswrapper[4751]: I0131 14:56:46.698272 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/772cd794-fe9a-4ac3-8df8-e7f29edb85bf-util\") pod \"772cd794-fe9a-4ac3-8df8-e7f29edb85bf\" (UID: \"772cd794-fe9a-4ac3-8df8-e7f29edb85bf\") "
Jan 31 14:56:46 crc kubenswrapper[4751]: I0131 14:56:46.698385 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/772cd794-fe9a-4ac3-8df8-e7f29edb85bf-bundle\") pod \"772cd794-fe9a-4ac3-8df8-e7f29edb85bf\" (UID: \"772cd794-fe9a-4ac3-8df8-e7f29edb85bf\") "
Jan 31 14:56:46 crc kubenswrapper[4751]: I0131 14:56:46.698540 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pss5r\" (UniqueName: \"kubernetes.io/projected/772cd794-fe9a-4ac3-8df8-e7f29edb85bf-kube-api-access-pss5r\") pod \"772cd794-fe9a-4ac3-8df8-e7f29edb85bf\" (UID: \"772cd794-fe9a-4ac3-8df8-e7f29edb85bf\") "
Jan 31 14:56:46 crc kubenswrapper[4751]: I0131 14:56:46.700231 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/772cd794-fe9a-4ac3-8df8-e7f29edb85bf-bundle" (OuterVolumeSpecName: "bundle") pod "772cd794-fe9a-4ac3-8df8-e7f29edb85bf" (UID: "772cd794-fe9a-4ac3-8df8-e7f29edb85bf"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 14:56:46 crc kubenswrapper[4751]: I0131 14:56:46.705057 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/772cd794-fe9a-4ac3-8df8-e7f29edb85bf-kube-api-access-pss5r" (OuterVolumeSpecName: "kube-api-access-pss5r") pod "772cd794-fe9a-4ac3-8df8-e7f29edb85bf" (UID: "772cd794-fe9a-4ac3-8df8-e7f29edb85bf"). InnerVolumeSpecName "kube-api-access-pss5r". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 14:56:46 crc kubenswrapper[4751]: I0131 14:56:46.731058 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/772cd794-fe9a-4ac3-8df8-e7f29edb85bf-util" (OuterVolumeSpecName: "util") pod "772cd794-fe9a-4ac3-8df8-e7f29edb85bf" (UID: "772cd794-fe9a-4ac3-8df8-e7f29edb85bf"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 14:56:46 crc kubenswrapper[4751]: I0131 14:56:46.800554 4751 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/772cd794-fe9a-4ac3-8df8-e7f29edb85bf-bundle\") on node \"crc\" DevicePath \"\""
Jan 31 14:56:46 crc kubenswrapper[4751]: I0131 14:56:46.800590 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pss5r\" (UniqueName: \"kubernetes.io/projected/772cd794-fe9a-4ac3-8df8-e7f29edb85bf-kube-api-access-pss5r\") on node \"crc\" DevicePath \"\""
Jan 31 14:56:46 crc kubenswrapper[4751]: I0131 14:56:46.800603 4751 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/772cd794-fe9a-4ac3-8df8-e7f29edb85bf-util\") on node \"crc\" DevicePath \"\""
Jan 31 14:56:47 crc kubenswrapper[4751]: I0131 14:56:47.246150 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4ef6sfsr" event={"ID":"772cd794-fe9a-4ac3-8df8-e7f29edb85bf","Type":"ContainerDied","Data":"2fb806f8b06eb6a79ed37ad373eb1b349cbf24edb359d55111d6e03a3fda0ba6"}
Jan 31 14:56:47 crc kubenswrapper[4751]: I0131 14:56:47.246463 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2fb806f8b06eb6a79ed37ad373eb1b349cbf24edb359d55111d6e03a3fda0ba6"
Jan 31 14:56:47 crc kubenswrapper[4751]: I0131 14:56:47.246212 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4ef6sfsr"
Jan 31 14:56:47 crc kubenswrapper[4751]: I0131 14:56:47.263183 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-g8hgc"
Jan 31 14:56:47 crc kubenswrapper[4751]: I0131 14:56:47.263366 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-g8hgc"
Jan 31 14:56:47 crc kubenswrapper[4751]: I0131 14:56:47.305171 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-g8hgc"
Jan 31 14:56:47 crc kubenswrapper[4751]: I0131 14:56:47.527182 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-m96j8"]
Jan 31 14:56:47 crc kubenswrapper[4751]: E0131 14:56:47.527478 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="772cd794-fe9a-4ac3-8df8-e7f29edb85bf" containerName="extract"
Jan 31 14:56:47 crc kubenswrapper[4751]: I0131 14:56:47.527495 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="772cd794-fe9a-4ac3-8df8-e7f29edb85bf" containerName="extract"
Jan 31 14:56:47 crc kubenswrapper[4751]: E0131 14:56:47.527517 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="772cd794-fe9a-4ac3-8df8-e7f29edb85bf" containerName="pull"
Jan 31 14:56:47 crc kubenswrapper[4751]: I0131 14:56:47.527527 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="772cd794-fe9a-4ac3-8df8-e7f29edb85bf" containerName="pull"
Jan 31 14:56:47 crc kubenswrapper[4751]: E0131 14:56:47.527547 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="772cd794-fe9a-4ac3-8df8-e7f29edb85bf" containerName="util"
Jan 31 14:56:47 crc kubenswrapper[4751]: I0131 14:56:47.527555 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="772cd794-fe9a-4ac3-8df8-e7f29edb85bf" containerName="util"
Jan 31 14:56:47 crc kubenswrapper[4751]: I0131 14:56:47.527697 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="772cd794-fe9a-4ac3-8df8-e7f29edb85bf" containerName="extract"
Jan 31 14:56:47 crc kubenswrapper[4751]: I0131 14:56:47.528773 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m96j8"
Jan 31 14:56:47 crc kubenswrapper[4751]: I0131 14:56:47.548425 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m96j8"]
Jan 31 14:56:47 crc kubenswrapper[4751]: I0131 14:56:47.711102 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbf9b8e1-7d1f-4fe4-9111-c040d8842d40-catalog-content\") pod \"community-operators-m96j8\" (UID: \"cbf9b8e1-7d1f-4fe4-9111-c040d8842d40\") " pod="openshift-marketplace/community-operators-m96j8"
Jan 31 14:56:47 crc kubenswrapper[4751]: I0131 14:56:47.711164 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbf9b8e1-7d1f-4fe4-9111-c040d8842d40-utilities\") pod \"community-operators-m96j8\" (UID: \"cbf9b8e1-7d1f-4fe4-9111-c040d8842d40\") " pod="openshift-marketplace/community-operators-m96j8"
Jan 31 14:56:47 crc kubenswrapper[4751]: I0131 14:56:47.711334 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7lgqw\" (UniqueName: \"kubernetes.io/projected/cbf9b8e1-7d1f-4fe4-9111-c040d8842d40-kube-api-access-7lgqw\") pod \"community-operators-m96j8\" (UID: \"cbf9b8e1-7d1f-4fe4-9111-c040d8842d40\") " pod="openshift-marketplace/community-operators-m96j8"
Jan 31 14:56:47 crc kubenswrapper[4751]: I0131 14:56:47.812975 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7lgqw\" (UniqueName: \"kubernetes.io/projected/cbf9b8e1-7d1f-4fe4-9111-c040d8842d40-kube-api-access-7lgqw\") pod \"community-operators-m96j8\" (UID: \"cbf9b8e1-7d1f-4fe4-9111-c040d8842d40\") " pod="openshift-marketplace/community-operators-m96j8"
Jan 31 14:56:47 crc kubenswrapper[4751]: I0131 14:56:47.813113 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbf9b8e1-7d1f-4fe4-9111-c040d8842d40-catalog-content\") pod \"community-operators-m96j8\" (UID: \"cbf9b8e1-7d1f-4fe4-9111-c040d8842d40\") " pod="openshift-marketplace/community-operators-m96j8"
Jan 31 14:56:47 crc kubenswrapper[4751]: I0131 14:56:47.813138 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbf9b8e1-7d1f-4fe4-9111-c040d8842d40-utilities\") pod \"community-operators-m96j8\" (UID: \"cbf9b8e1-7d1f-4fe4-9111-c040d8842d40\") " pod="openshift-marketplace/community-operators-m96j8"
Jan 31 14:56:47 crc kubenswrapper[4751]: I0131 14:56:47.813525 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbf9b8e1-7d1f-4fe4-9111-c040d8842d40-utilities\") pod \"community-operators-m96j8\" (UID: \"cbf9b8e1-7d1f-4fe4-9111-c040d8842d40\") " pod="openshift-marketplace/community-operators-m96j8"
Jan 31 14:56:47 crc kubenswrapper[4751]: I0131 14:56:47.813652 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbf9b8e1-7d1f-4fe4-9111-c040d8842d40-catalog-content\") pod \"community-operators-m96j8\" (UID: \"cbf9b8e1-7d1f-4fe4-9111-c040d8842d40\") " pod="openshift-marketplace/community-operators-m96j8"
Jan 31 14:56:47 crc kubenswrapper[4751]: I0131 14:56:47.833310 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7lgqw\" (UniqueName: \"kubernetes.io/projected/cbf9b8e1-7d1f-4fe4-9111-c040d8842d40-kube-api-access-7lgqw\") pod \"community-operators-m96j8\" (UID: \"cbf9b8e1-7d1f-4fe4-9111-c040d8842d40\") " pod="openshift-marketplace/community-operators-m96j8"
Jan 31 14:56:47 crc kubenswrapper[4751]: I0131 14:56:47.845740 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m96j8"
Jan 31 14:56:48 crc kubenswrapper[4751]: I0131 14:56:48.290344 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-m96j8"]
Jan 31 14:56:48 crc kubenswrapper[4751]: I0131 14:56:48.319582 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-g8hgc"
Jan 31 14:56:49 crc kubenswrapper[4751]: I0131 14:56:49.259446 4751 generic.go:334] "Generic (PLEG): container finished" podID="cbf9b8e1-7d1f-4fe4-9111-c040d8842d40" containerID="c5d30bd3425343861aefae2acc945d17403c59649b3737361473864cd06659ea" exitCode=0
Jan 31 14:56:49 crc kubenswrapper[4751]: I0131 14:56:49.259506 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m96j8" event={"ID":"cbf9b8e1-7d1f-4fe4-9111-c040d8842d40","Type":"ContainerDied","Data":"c5d30bd3425343861aefae2acc945d17403c59649b3737361473864cd06659ea"}
Jan 31 14:56:49 crc kubenswrapper[4751]: I0131 14:56:49.259895 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m96j8" event={"ID":"cbf9b8e1-7d1f-4fe4-9111-c040d8842d40","Type":"ContainerStarted","Data":"7a18a3e9ad73c9dedb3fcbe2dae3a06a1766a1c7a3e3e74f8e52e63336ce4c6a"}
Jan 31 14:56:49 crc kubenswrapper[4751]: I0131 14:56:49.736966 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-jqv5m"]
Jan 31 14:56:49 crc kubenswrapper[4751]: I0131 14:56:49.797760 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jqv5m"]
Jan 31 14:56:49 crc kubenswrapper[4751]: I0131 14:56:49.797864 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jqv5m"
Jan 31 14:56:49 crc kubenswrapper[4751]: I0131 14:56:49.946500 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10399bf7-0161-488c-8001-e6ba927889e5-utilities\") pod \"certified-operators-jqv5m\" (UID: \"10399bf7-0161-488c-8001-e6ba927889e5\") " pod="openshift-marketplace/certified-operators-jqv5m"
Jan 31 14:56:49 crc kubenswrapper[4751]: I0131 14:56:49.946557 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10399bf7-0161-488c-8001-e6ba927889e5-catalog-content\") pod \"certified-operators-jqv5m\" (UID: \"10399bf7-0161-488c-8001-e6ba927889e5\") " pod="openshift-marketplace/certified-operators-jqv5m"
Jan 31 14:56:49 crc kubenswrapper[4751]: I0131 14:56:49.946595 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqkft\" (UniqueName: \"kubernetes.io/projected/10399bf7-0161-488c-8001-e6ba927889e5-kube-api-access-cqkft\") pod \"certified-operators-jqv5m\" (UID: \"10399bf7-0161-488c-8001-e6ba927889e5\") " pod="openshift-marketplace/certified-operators-jqv5m"
Jan 31 14:56:50 crc kubenswrapper[4751]: I0131 14:56:50.048384 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqkft\" (UniqueName: \"kubernetes.io/projected/10399bf7-0161-488c-8001-e6ba927889e5-kube-api-access-cqkft\") pod \"certified-operators-jqv5m\" (UID: \"10399bf7-0161-488c-8001-e6ba927889e5\") " pod="openshift-marketplace/certified-operators-jqv5m"
Jan 31 14:56:50 crc kubenswrapper[4751]: I0131 14:56:50.048489 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10399bf7-0161-488c-8001-e6ba927889e5-utilities\") pod \"certified-operators-jqv5m\" (UID: \"10399bf7-0161-488c-8001-e6ba927889e5\") " pod="openshift-marketplace/certified-operators-jqv5m"
Jan 31 14:56:50 crc kubenswrapper[4751]: I0131 14:56:50.048519 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10399bf7-0161-488c-8001-e6ba927889e5-catalog-content\") pod \"certified-operators-jqv5m\" (UID: \"10399bf7-0161-488c-8001-e6ba927889e5\") " pod="openshift-marketplace/certified-operators-jqv5m"
Jan 31 14:56:50 crc kubenswrapper[4751]: I0131 14:56:50.048907 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10399bf7-0161-488c-8001-e6ba927889e5-catalog-content\") pod \"certified-operators-jqv5m\" (UID: \"10399bf7-0161-488c-8001-e6ba927889e5\") " pod="openshift-marketplace/certified-operators-jqv5m"
Jan 31 14:56:50 crc kubenswrapper[4751]: I0131 14:56:50.049303 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10399bf7-0161-488c-8001-e6ba927889e5-utilities\") pod \"certified-operators-jqv5m\" (UID: \"10399bf7-0161-488c-8001-e6ba927889e5\") " pod="openshift-marketplace/certified-operators-jqv5m"
Jan 31 14:56:50 crc kubenswrapper[4751]: I0131 14:56:50.067480 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqkft\" (UniqueName: \"kubernetes.io/projected/10399bf7-0161-488c-8001-e6ba927889e5-kube-api-access-cqkft\") pod \"certified-operators-jqv5m\" (UID: \"10399bf7-0161-488c-8001-e6ba927889e5\") " pod="openshift-marketplace/certified-operators-jqv5m"
Jan 31 14:56:50 crc kubenswrapper[4751]: I0131 14:56:50.141763 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jqv5m"
Jan 31 14:56:50 crc kubenswrapper[4751]: I0131 14:56:50.582650 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-jqv5m"]
Jan 31 14:56:51 crc kubenswrapper[4751]: I0131 14:56:51.279008 4751 generic.go:334] "Generic (PLEG): container finished" podID="10399bf7-0161-488c-8001-e6ba927889e5" containerID="e8bb73c8fea313592a221df9e5dbff86d9e1c2c8a380afe92fecdcb99f557785" exitCode=0
Jan 31 14:56:51 crc kubenswrapper[4751]: I0131 14:56:51.279082 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqv5m" event={"ID":"10399bf7-0161-488c-8001-e6ba927889e5","Type":"ContainerDied","Data":"e8bb73c8fea313592a221df9e5dbff86d9e1c2c8a380afe92fecdcb99f557785"}
Jan 31 14:56:51 crc kubenswrapper[4751]: I0131 14:56:51.279107 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqv5m" event={"ID":"10399bf7-0161-488c-8001-e6ba927889e5","Type":"ContainerStarted","Data":"1db0e98d5840943d47fedc71d5069e41ebcf9dcd1cc035ff41c419ba526b7bb7"}
Jan 31 14:56:51 crc kubenswrapper[4751]: I0131 14:56:51.284388 4751 generic.go:334] "Generic (PLEG): container finished" podID="cbf9b8e1-7d1f-4fe4-9111-c040d8842d40" containerID="0a2dcc31122c7c5482843a5e80399a6846c7271da25c796eef9ce298a6180701" exitCode=0
Jan 31 14:56:51 crc kubenswrapper[4751]: I0131 14:56:51.284431 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m96j8" event={"ID":"cbf9b8e1-7d1f-4fe4-9111-c040d8842d40","Type":"ContainerDied","Data":"0a2dcc31122c7c5482843a5e80399a6846c7271da25c796eef9ce298a6180701"}
Jan 31 14:56:52 crc kubenswrapper[4751]: I0131 14:56:52.112158 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g8hgc"]
Jan 31 14:56:52 crc kubenswrapper[4751]: I0131 14:56:52.112672 4751 kuberuntime_container.go:808]
"Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-g8hgc" podUID="eb6fb532-641c-459e-bb99-ba0f9779510c" containerName="registry-server" containerID="cri-o://91f8565e18d5814616edf1f9308b1748710df7aa51f507a293df07392a1fe336" gracePeriod=2 Jan 31 14:56:52 crc kubenswrapper[4751]: I0131 14:56:52.292771 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqv5m" event={"ID":"10399bf7-0161-488c-8001-e6ba927889e5","Type":"ContainerStarted","Data":"9f513ef2d1f9802f83bbe5839191b499dfe0fb39587e4e549696c1f2a0b3b5c7"} Jan 31 14:56:52 crc kubenswrapper[4751]: I0131 14:56:52.296169 4751 generic.go:334] "Generic (PLEG): container finished" podID="eb6fb532-641c-459e-bb99-ba0f9779510c" containerID="91f8565e18d5814616edf1f9308b1748710df7aa51f507a293df07392a1fe336" exitCode=0 Jan 31 14:56:52 crc kubenswrapper[4751]: I0131 14:56:52.296249 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8hgc" event={"ID":"eb6fb532-641c-459e-bb99-ba0f9779510c","Type":"ContainerDied","Data":"91f8565e18d5814616edf1f9308b1748710df7aa51f507a293df07392a1fe336"} Jan 31 14:56:52 crc kubenswrapper[4751]: I0131 14:56:52.299397 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m96j8" event={"ID":"cbf9b8e1-7d1f-4fe4-9111-c040d8842d40","Type":"ContainerStarted","Data":"8f5e6c80881d23c78dc00e4be207273e5eb7f1474c90cd90d5b02783a4206916"} Jan 31 14:56:52 crc kubenswrapper[4751]: I0131 14:56:52.330342 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-m96j8" podStartSLOduration=2.742024047 podStartE2EDuration="5.330328138s" podCreationTimestamp="2026-01-31 14:56:47 +0000 UTC" firstStartedPulling="2026-01-31 14:56:49.263432813 +0000 UTC m=+913.638145708" lastFinishedPulling="2026-01-31 14:56:51.851736874 +0000 UTC m=+916.226449799" observedRunningTime="2026-01-31 
14:56:52.327888464 +0000 UTC m=+916.702601349" watchObservedRunningTime="2026-01-31 14:56:52.330328138 +0000 UTC m=+916.705041023" Jan 31 14:56:52 crc kubenswrapper[4751]: I0131 14:56:52.569706 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g8hgc" Jan 31 14:56:52 crc kubenswrapper[4751]: I0131 14:56:52.683039 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjhf4\" (UniqueName: \"kubernetes.io/projected/eb6fb532-641c-459e-bb99-ba0f9779510c-kube-api-access-pjhf4\") pod \"eb6fb532-641c-459e-bb99-ba0f9779510c\" (UID: \"eb6fb532-641c-459e-bb99-ba0f9779510c\") " Jan 31 14:56:52 crc kubenswrapper[4751]: I0131 14:56:52.683212 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb6fb532-641c-459e-bb99-ba0f9779510c-catalog-content\") pod \"eb6fb532-641c-459e-bb99-ba0f9779510c\" (UID: \"eb6fb532-641c-459e-bb99-ba0f9779510c\") " Jan 31 14:56:52 crc kubenswrapper[4751]: I0131 14:56:52.683239 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb6fb532-641c-459e-bb99-ba0f9779510c-utilities\") pod \"eb6fb532-641c-459e-bb99-ba0f9779510c\" (UID: \"eb6fb532-641c-459e-bb99-ba0f9779510c\") " Jan 31 14:56:52 crc kubenswrapper[4751]: I0131 14:56:52.684021 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb6fb532-641c-459e-bb99-ba0f9779510c-utilities" (OuterVolumeSpecName: "utilities") pod "eb6fb532-641c-459e-bb99-ba0f9779510c" (UID: "eb6fb532-641c-459e-bb99-ba0f9779510c"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:56:52 crc kubenswrapper[4751]: I0131 14:56:52.689789 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb6fb532-641c-459e-bb99-ba0f9779510c-kube-api-access-pjhf4" (OuterVolumeSpecName: "kube-api-access-pjhf4") pod "eb6fb532-641c-459e-bb99-ba0f9779510c" (UID: "eb6fb532-641c-459e-bb99-ba0f9779510c"). InnerVolumeSpecName "kube-api-access-pjhf4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:56:52 crc kubenswrapper[4751]: I0131 14:56:52.703389 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eb6fb532-641c-459e-bb99-ba0f9779510c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "eb6fb532-641c-459e-bb99-ba0f9779510c" (UID: "eb6fb532-641c-459e-bb99-ba0f9779510c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:56:52 crc kubenswrapper[4751]: I0131 14:56:52.785177 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/eb6fb532-641c-459e-bb99-ba0f9779510c-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 14:56:52 crc kubenswrapper[4751]: I0131 14:56:52.785402 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/eb6fb532-641c-459e-bb99-ba0f9779510c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 14:56:52 crc kubenswrapper[4751]: I0131 14:56:52.785462 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjhf4\" (UniqueName: \"kubernetes.io/projected/eb6fb532-641c-459e-bb99-ba0f9779510c-kube-api-access-pjhf4\") on node \"crc\" DevicePath \"\"" Jan 31 14:56:53 crc kubenswrapper[4751]: I0131 14:56:53.306945 4751 generic.go:334] "Generic (PLEG): container finished" podID="10399bf7-0161-488c-8001-e6ba927889e5" 
containerID="9f513ef2d1f9802f83bbe5839191b499dfe0fb39587e4e549696c1f2a0b3b5c7" exitCode=0 Jan 31 14:56:53 crc kubenswrapper[4751]: I0131 14:56:53.306979 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqv5m" event={"ID":"10399bf7-0161-488c-8001-e6ba927889e5","Type":"ContainerDied","Data":"9f513ef2d1f9802f83bbe5839191b499dfe0fb39587e4e549696c1f2a0b3b5c7"} Jan 31 14:56:53 crc kubenswrapper[4751]: I0131 14:56:53.309722 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-g8hgc" Jan 31 14:56:53 crc kubenswrapper[4751]: I0131 14:56:53.317480 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-g8hgc" event={"ID":"eb6fb532-641c-459e-bb99-ba0f9779510c","Type":"ContainerDied","Data":"b08d3a83c80ca64edae98efdfdbc3ea7e3985a0df616ed7d528a097908ab573a"} Jan 31 14:56:53 crc kubenswrapper[4751]: I0131 14:56:53.317544 4751 scope.go:117] "RemoveContainer" containerID="91f8565e18d5814616edf1f9308b1748710df7aa51f507a293df07392a1fe336" Jan 31 14:56:53 crc kubenswrapper[4751]: I0131 14:56:53.348754 4751 scope.go:117] "RemoveContainer" containerID="573923f23b16312c13dc5ab0d41de7478d1a0111af903d7fde9c795304380992" Jan 31 14:56:53 crc kubenswrapper[4751]: I0131 14:56:53.374310 4751 scope.go:117] "RemoveContainer" containerID="f76a9ace8d656c05224f8d7e9efd34944bad63abdbcc558bcbf72f5ac86facb5" Jan 31 14:56:53 crc kubenswrapper[4751]: I0131 14:56:53.381593 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-g8hgc"] Jan 31 14:56:53 crc kubenswrapper[4751]: I0131 14:56:53.388909 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-g8hgc"] Jan 31 14:56:53 crc kubenswrapper[4751]: I0131 14:56:53.724167 4751 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openstack-operators/keystone-operator-controller-manager-7f68887647-qvqrq"] Jan 31 14:56:53 crc kubenswrapper[4751]: E0131 14:56:53.724491 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb6fb532-641c-459e-bb99-ba0f9779510c" containerName="registry-server" Jan 31 14:56:53 crc kubenswrapper[4751]: I0131 14:56:53.724513 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb6fb532-641c-459e-bb99-ba0f9779510c" containerName="registry-server" Jan 31 14:56:53 crc kubenswrapper[4751]: E0131 14:56:53.724536 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb6fb532-641c-459e-bb99-ba0f9779510c" containerName="extract-utilities" Jan 31 14:56:53 crc kubenswrapper[4751]: I0131 14:56:53.724547 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb6fb532-641c-459e-bb99-ba0f9779510c" containerName="extract-utilities" Jan 31 14:56:53 crc kubenswrapper[4751]: E0131 14:56:53.724563 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb6fb532-641c-459e-bb99-ba0f9779510c" containerName="extract-content" Jan 31 14:56:53 crc kubenswrapper[4751]: I0131 14:56:53.724570 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb6fb532-641c-459e-bb99-ba0f9779510c" containerName="extract-content" Jan 31 14:56:53 crc kubenswrapper[4751]: I0131 14:56:53.724707 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb6fb532-641c-459e-bb99-ba0f9779510c" containerName="registry-server" Jan 31 14:56:53 crc kubenswrapper[4751]: I0131 14:56:53.725282 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7f68887647-qvqrq" Jan 31 14:56:53 crc kubenswrapper[4751]: I0131 14:56:53.727434 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-service-cert" Jan 31 14:56:53 crc kubenswrapper[4751]: I0131 14:56:53.727566 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"keystone-operator-controller-manager-dockercfg-fpgv8" Jan 31 14:56:53 crc kubenswrapper[4751]: I0131 14:56:53.737788 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7f68887647-qvqrq"] Jan 31 14:56:53 crc kubenswrapper[4751]: I0131 14:56:53.900818 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/49ea8aae-ad89-4383-8f2f-ba35872fd605-webhook-cert\") pod \"keystone-operator-controller-manager-7f68887647-qvqrq\" (UID: \"49ea8aae-ad89-4383-8f2f-ba35872fd605\") " pod="openstack-operators/keystone-operator-controller-manager-7f68887647-qvqrq" Jan 31 14:56:53 crc kubenswrapper[4751]: I0131 14:56:53.901124 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/49ea8aae-ad89-4383-8f2f-ba35872fd605-apiservice-cert\") pod \"keystone-operator-controller-manager-7f68887647-qvqrq\" (UID: \"49ea8aae-ad89-4383-8f2f-ba35872fd605\") " pod="openstack-operators/keystone-operator-controller-manager-7f68887647-qvqrq" Jan 31 14:56:53 crc kubenswrapper[4751]: I0131 14:56:53.901236 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvdb6\" (UniqueName: \"kubernetes.io/projected/49ea8aae-ad89-4383-8f2f-ba35872fd605-kube-api-access-mvdb6\") pod \"keystone-operator-controller-manager-7f68887647-qvqrq\" 
(UID: \"49ea8aae-ad89-4383-8f2f-ba35872fd605\") " pod="openstack-operators/keystone-operator-controller-manager-7f68887647-qvqrq" Jan 31 14:56:54 crc kubenswrapper[4751]: I0131 14:56:54.002734 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/49ea8aae-ad89-4383-8f2f-ba35872fd605-apiservice-cert\") pod \"keystone-operator-controller-manager-7f68887647-qvqrq\" (UID: \"49ea8aae-ad89-4383-8f2f-ba35872fd605\") " pod="openstack-operators/keystone-operator-controller-manager-7f68887647-qvqrq" Jan 31 14:56:54 crc kubenswrapper[4751]: I0131 14:56:54.002824 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mvdb6\" (UniqueName: \"kubernetes.io/projected/49ea8aae-ad89-4383-8f2f-ba35872fd605-kube-api-access-mvdb6\") pod \"keystone-operator-controller-manager-7f68887647-qvqrq\" (UID: \"49ea8aae-ad89-4383-8f2f-ba35872fd605\") " pod="openstack-operators/keystone-operator-controller-manager-7f68887647-qvqrq" Jan 31 14:56:54 crc kubenswrapper[4751]: I0131 14:56:54.002892 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/49ea8aae-ad89-4383-8f2f-ba35872fd605-webhook-cert\") pod \"keystone-operator-controller-manager-7f68887647-qvqrq\" (UID: \"49ea8aae-ad89-4383-8f2f-ba35872fd605\") " pod="openstack-operators/keystone-operator-controller-manager-7f68887647-qvqrq" Jan 31 14:56:54 crc kubenswrapper[4751]: I0131 14:56:54.008557 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/49ea8aae-ad89-4383-8f2f-ba35872fd605-webhook-cert\") pod \"keystone-operator-controller-manager-7f68887647-qvqrq\" (UID: \"49ea8aae-ad89-4383-8f2f-ba35872fd605\") " pod="openstack-operators/keystone-operator-controller-manager-7f68887647-qvqrq" Jan 31 14:56:54 crc kubenswrapper[4751]: I0131 14:56:54.008723 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/49ea8aae-ad89-4383-8f2f-ba35872fd605-apiservice-cert\") pod \"keystone-operator-controller-manager-7f68887647-qvqrq\" (UID: \"49ea8aae-ad89-4383-8f2f-ba35872fd605\") " pod="openstack-operators/keystone-operator-controller-manager-7f68887647-qvqrq" Jan 31 14:56:54 crc kubenswrapper[4751]: I0131 14:56:54.019661 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvdb6\" (UniqueName: \"kubernetes.io/projected/49ea8aae-ad89-4383-8f2f-ba35872fd605-kube-api-access-mvdb6\") pod \"keystone-operator-controller-manager-7f68887647-qvqrq\" (UID: \"49ea8aae-ad89-4383-8f2f-ba35872fd605\") " pod="openstack-operators/keystone-operator-controller-manager-7f68887647-qvqrq" Jan 31 14:56:54 crc kubenswrapper[4751]: I0131 14:56:54.050608 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7f68887647-qvqrq" Jan 31 14:56:54 crc kubenswrapper[4751]: I0131 14:56:54.322033 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqv5m" event={"ID":"10399bf7-0161-488c-8001-e6ba927889e5","Type":"ContainerStarted","Data":"d6f182b4c85d9e139e51b5b38efb79b57d02af9a0357ca70a93a9631b5745566"} Jan 31 14:56:54 crc kubenswrapper[4751]: I0131 14:56:54.350447 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-jqv5m" podStartSLOduration=2.787879903 podStartE2EDuration="5.350425956s" podCreationTimestamp="2026-01-31 14:56:49 +0000 UTC" firstStartedPulling="2026-01-31 14:56:51.281017395 +0000 UTC m=+915.655730280" lastFinishedPulling="2026-01-31 14:56:53.843563448 +0000 UTC m=+918.218276333" observedRunningTime="2026-01-31 14:56:54.346699998 +0000 UTC m=+918.721412903" watchObservedRunningTime="2026-01-31 14:56:54.350425956 +0000 UTC 
m=+918.725138851" Jan 31 14:56:54 crc kubenswrapper[4751]: I0131 14:56:54.416935 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb6fb532-641c-459e-bb99-ba0f9779510c" path="/var/lib/kubelet/pods/eb6fb532-641c-459e-bb99-ba0f9779510c/volumes" Jan 31 14:56:54 crc kubenswrapper[4751]: I0131 14:56:54.539925 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7f68887647-qvqrq"] Jan 31 14:56:54 crc kubenswrapper[4751]: W0131 14:56:54.546307 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod49ea8aae_ad89_4383_8f2f_ba35872fd605.slice/crio-57b02fb0aa9fe148da3716e9376b1f52be8527b141c4d304bf8040ea0b69e451 WatchSource:0}: Error finding container 57b02fb0aa9fe148da3716e9376b1f52be8527b141c4d304bf8040ea0b69e451: Status 404 returned error can't find the container with id 57b02fb0aa9fe148da3716e9376b1f52be8527b141c4d304bf8040ea0b69e451 Jan 31 14:56:55 crc kubenswrapper[4751]: I0131 14:56:55.372977 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7f68887647-qvqrq" event={"ID":"49ea8aae-ad89-4383-8f2f-ba35872fd605","Type":"ContainerStarted","Data":"57b02fb0aa9fe148da3716e9376b1f52be8527b141c4d304bf8040ea0b69e451"} Jan 31 14:56:57 crc kubenswrapper[4751]: I0131 14:56:57.846255 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-m96j8" Jan 31 14:56:57 crc kubenswrapper[4751]: I0131 14:56:57.846576 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-m96j8" Jan 31 14:56:57 crc kubenswrapper[4751]: I0131 14:56:57.886282 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-m96j8" Jan 31 14:56:58 crc kubenswrapper[4751]: I0131 14:56:58.449506 4751 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-m96j8" Jan 31 14:56:59 crc kubenswrapper[4751]: I0131 14:56:59.402298 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7f68887647-qvqrq" event={"ID":"49ea8aae-ad89-4383-8f2f-ba35872fd605","Type":"ContainerStarted","Data":"0c37d2b2bcc47557f4028d9e251b0db237f4b56ff1d49ca666627d0449655ab2"} Jan 31 14:56:59 crc kubenswrapper[4751]: I0131 14:56:59.422120 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/keystone-operator-controller-manager-7f68887647-qvqrq" podStartSLOduration=1.859547294 podStartE2EDuration="6.422101378s" podCreationTimestamp="2026-01-31 14:56:53 +0000 UTC" firstStartedPulling="2026-01-31 14:56:54.549121985 +0000 UTC m=+918.923834880" lastFinishedPulling="2026-01-31 14:56:59.111676079 +0000 UTC m=+923.486388964" observedRunningTime="2026-01-31 14:56:59.419689264 +0000 UTC m=+923.794402179" watchObservedRunningTime="2026-01-31 14:56:59.422101378 +0000 UTC m=+923.796814273" Jan 31 14:57:00 crc kubenswrapper[4751]: I0131 14:57:00.142117 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-jqv5m" Jan 31 14:57:00 crc kubenswrapper[4751]: I0131 14:57:00.142177 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-jqv5m" Jan 31 14:57:00 crc kubenswrapper[4751]: I0131 14:57:00.217144 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-jqv5m" Jan 31 14:57:00 crc kubenswrapper[4751]: I0131 14:57:00.413764 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/keystone-operator-controller-manager-7f68887647-qvqrq" Jan 31 14:57:00 crc kubenswrapper[4751]: I0131 14:57:00.459790 4751 kubelet.go:2542] 
"SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-jqv5m" Jan 31 14:57:02 crc kubenswrapper[4751]: I0131 14:57:02.911761 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m96j8"] Jan 31 14:57:02 crc kubenswrapper[4751]: I0131 14:57:02.912243 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-m96j8" podUID="cbf9b8e1-7d1f-4fe4-9111-c040d8842d40" containerName="registry-server" containerID="cri-o://8f5e6c80881d23c78dc00e4be207273e5eb7f1474c90cd90d5b02783a4206916" gracePeriod=2 Jan 31 14:57:04 crc kubenswrapper[4751]: I0131 14:57:04.055023 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/keystone-operator-controller-manager-7f68887647-qvqrq" Jan 31 14:57:04 crc kubenswrapper[4751]: I0131 14:57:04.434638 4751 generic.go:334] "Generic (PLEG): container finished" podID="cbf9b8e1-7d1f-4fe4-9111-c040d8842d40" containerID="8f5e6c80881d23c78dc00e4be207273e5eb7f1474c90cd90d5b02783a4206916" exitCode=0 Jan 31 14:57:04 crc kubenswrapper[4751]: I0131 14:57:04.434689 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m96j8" event={"ID":"cbf9b8e1-7d1f-4fe4-9111-c040d8842d40","Type":"ContainerDied","Data":"8f5e6c80881d23c78dc00e4be207273e5eb7f1474c90cd90d5b02783a4206916"} Jan 31 14:57:04 crc kubenswrapper[4751]: I0131 14:57:04.434713 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-m96j8" event={"ID":"cbf9b8e1-7d1f-4fe4-9111-c040d8842d40","Type":"ContainerDied","Data":"7a18a3e9ad73c9dedb3fcbe2dae3a06a1766a1c7a3e3e74f8e52e63336ce4c6a"} Jan 31 14:57:04 crc kubenswrapper[4751]: I0131 14:57:04.434724 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7a18a3e9ad73c9dedb3fcbe2dae3a06a1766a1c7a3e3e74f8e52e63336ce4c6a" Jan 31 14:57:04 
crc kubenswrapper[4751]: I0131 14:57:04.439382 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-m96j8" Jan 31 14:57:04 crc kubenswrapper[4751]: I0131 14:57:04.524956 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbf9b8e1-7d1f-4fe4-9111-c040d8842d40-utilities\") pod \"cbf9b8e1-7d1f-4fe4-9111-c040d8842d40\" (UID: \"cbf9b8e1-7d1f-4fe4-9111-c040d8842d40\") " Jan 31 14:57:04 crc kubenswrapper[4751]: I0131 14:57:04.525052 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbf9b8e1-7d1f-4fe4-9111-c040d8842d40-catalog-content\") pod \"cbf9b8e1-7d1f-4fe4-9111-c040d8842d40\" (UID: \"cbf9b8e1-7d1f-4fe4-9111-c040d8842d40\") " Jan 31 14:57:04 crc kubenswrapper[4751]: I0131 14:57:04.525095 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7lgqw\" (UniqueName: \"kubernetes.io/projected/cbf9b8e1-7d1f-4fe4-9111-c040d8842d40-kube-api-access-7lgqw\") pod \"cbf9b8e1-7d1f-4fe4-9111-c040d8842d40\" (UID: \"cbf9b8e1-7d1f-4fe4-9111-c040d8842d40\") " Jan 31 14:57:04 crc kubenswrapper[4751]: I0131 14:57:04.527315 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbf9b8e1-7d1f-4fe4-9111-c040d8842d40-utilities" (OuterVolumeSpecName: "utilities") pod "cbf9b8e1-7d1f-4fe4-9111-c040d8842d40" (UID: "cbf9b8e1-7d1f-4fe4-9111-c040d8842d40"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:57:04 crc kubenswrapper[4751]: I0131 14:57:04.544850 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbf9b8e1-7d1f-4fe4-9111-c040d8842d40-kube-api-access-7lgqw" (OuterVolumeSpecName: "kube-api-access-7lgqw") pod "cbf9b8e1-7d1f-4fe4-9111-c040d8842d40" (UID: "cbf9b8e1-7d1f-4fe4-9111-c040d8842d40"). InnerVolumeSpecName "kube-api-access-7lgqw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:57:04 crc kubenswrapper[4751]: I0131 14:57:04.590760 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbf9b8e1-7d1f-4fe4-9111-c040d8842d40-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cbf9b8e1-7d1f-4fe4-9111-c040d8842d40" (UID: "cbf9b8e1-7d1f-4fe4-9111-c040d8842d40"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:57:04 crc kubenswrapper[4751]: I0131 14:57:04.626453 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cbf9b8e1-7d1f-4fe4-9111-c040d8842d40-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:04 crc kubenswrapper[4751]: I0131 14:57:04.626489 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7lgqw\" (UniqueName: \"kubernetes.io/projected/cbf9b8e1-7d1f-4fe4-9111-c040d8842d40-kube-api-access-7lgqw\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:04 crc kubenswrapper[4751]: I0131 14:57:04.626503 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cbf9b8e1-7d1f-4fe4-9111-c040d8842d40-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:05 crc kubenswrapper[4751]: I0131 14:57:05.439901 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-m96j8" Jan 31 14:57:05 crc kubenswrapper[4751]: I0131 14:57:05.497159 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-m96j8"] Jan 31 14:57:05 crc kubenswrapper[4751]: I0131 14:57:05.505174 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-m96j8"] Jan 31 14:57:06 crc kubenswrapper[4751]: I0131 14:57:06.309881 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-jqv5m"] Jan 31 14:57:06 crc kubenswrapper[4751]: I0131 14:57:06.310126 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-jqv5m" podUID="10399bf7-0161-488c-8001-e6ba927889e5" containerName="registry-server" containerID="cri-o://d6f182b4c85d9e139e51b5b38efb79b57d02af9a0357ca70a93a9631b5745566" gracePeriod=2 Jan 31 14:57:06 crc kubenswrapper[4751]: I0131 14:57:06.411963 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbf9b8e1-7d1f-4fe4-9111-c040d8842d40" path="/var/lib/kubelet/pods/cbf9b8e1-7d1f-4fe4-9111-c040d8842d40/volumes" Jan 31 14:57:06 crc kubenswrapper[4751]: I0131 14:57:06.446686 4751 generic.go:334] "Generic (PLEG): container finished" podID="10399bf7-0161-488c-8001-e6ba927889e5" containerID="d6f182b4c85d9e139e51b5b38efb79b57d02af9a0357ca70a93a9631b5745566" exitCode=0 Jan 31 14:57:06 crc kubenswrapper[4751]: I0131 14:57:06.446767 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqv5m" event={"ID":"10399bf7-0161-488c-8001-e6ba927889e5","Type":"ContainerDied","Data":"d6f182b4c85d9e139e51b5b38efb79b57d02af9a0357ca70a93a9631b5745566"} Jan 31 14:57:06 crc kubenswrapper[4751]: I0131 14:57:06.447923 4751 generic.go:334] "Generic (PLEG): container finished" podID="19317a08-b18b-42c9-bdc9-394e1e06257d" 
containerID="505748b1e10e777b66b173a6705d54ff333de5b60e6cd125a1cf81bd7167e586" exitCode=0 Jan 31 14:57:06 crc kubenswrapper[4751]: I0131 14:57:06.447961 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/rabbitmq-server-0" event={"ID":"19317a08-b18b-42c9-bdc9-394e1e06257d","Type":"ContainerDied","Data":"505748b1e10e777b66b173a6705d54ff333de5b60e6cd125a1cf81bd7167e586"} Jan 31 14:57:06 crc kubenswrapper[4751]: I0131 14:57:06.713104 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jqv5m" Jan 31 14:57:06 crc kubenswrapper[4751]: I0131 14:57:06.851859 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqkft\" (UniqueName: \"kubernetes.io/projected/10399bf7-0161-488c-8001-e6ba927889e5-kube-api-access-cqkft\") pod \"10399bf7-0161-488c-8001-e6ba927889e5\" (UID: \"10399bf7-0161-488c-8001-e6ba927889e5\") " Jan 31 14:57:06 crc kubenswrapper[4751]: I0131 14:57:06.851902 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10399bf7-0161-488c-8001-e6ba927889e5-catalog-content\") pod \"10399bf7-0161-488c-8001-e6ba927889e5\" (UID: \"10399bf7-0161-488c-8001-e6ba927889e5\") " Jan 31 14:57:06 crc kubenswrapper[4751]: I0131 14:57:06.851938 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10399bf7-0161-488c-8001-e6ba927889e5-utilities\") pod \"10399bf7-0161-488c-8001-e6ba927889e5\" (UID: \"10399bf7-0161-488c-8001-e6ba927889e5\") " Jan 31 14:57:06 crc kubenswrapper[4751]: I0131 14:57:06.852689 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10399bf7-0161-488c-8001-e6ba927889e5-utilities" (OuterVolumeSpecName: "utilities") pod "10399bf7-0161-488c-8001-e6ba927889e5" (UID: 
"10399bf7-0161-488c-8001-e6ba927889e5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:57:06 crc kubenswrapper[4751]: I0131 14:57:06.878358 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10399bf7-0161-488c-8001-e6ba927889e5-kube-api-access-cqkft" (OuterVolumeSpecName: "kube-api-access-cqkft") pod "10399bf7-0161-488c-8001-e6ba927889e5" (UID: "10399bf7-0161-488c-8001-e6ba927889e5"). InnerVolumeSpecName "kube-api-access-cqkft". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:57:06 crc kubenswrapper[4751]: I0131 14:57:06.910181 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10399bf7-0161-488c-8001-e6ba927889e5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10399bf7-0161-488c-8001-e6ba927889e5" (UID: "10399bf7-0161-488c-8001-e6ba927889e5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:57:06 crc kubenswrapper[4751]: I0131 14:57:06.953614 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqkft\" (UniqueName: \"kubernetes.io/projected/10399bf7-0161-488c-8001-e6ba927889e5-kube-api-access-cqkft\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:06 crc kubenswrapper[4751]: I0131 14:57:06.953641 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10399bf7-0161-488c-8001-e6ba927889e5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:06 crc kubenswrapper[4751]: I0131 14:57:06.953650 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10399bf7-0161-488c-8001-e6ba927889e5-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:07 crc kubenswrapper[4751]: I0131 14:57:07.459573 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="glance-kuttl-tests/rabbitmq-server-0" event={"ID":"19317a08-b18b-42c9-bdc9-394e1e06257d","Type":"ContainerStarted","Data":"07f687eb09cbc17ef2ede020cb3e1c35352131bf2222486a5b70524349e266f9"} Jan 31 14:57:07 crc kubenswrapper[4751]: I0131 14:57:07.459802 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 14:57:07 crc kubenswrapper[4751]: I0131 14:57:07.461683 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-jqv5m" event={"ID":"10399bf7-0161-488c-8001-e6ba927889e5","Type":"ContainerDied","Data":"1db0e98d5840943d47fedc71d5069e41ebcf9dcd1cc035ff41c419ba526b7bb7"} Jan 31 14:57:07 crc kubenswrapper[4751]: I0131 14:57:07.461743 4751 scope.go:117] "RemoveContainer" containerID="d6f182b4c85d9e139e51b5b38efb79b57d02af9a0357ca70a93a9631b5745566" Jan 31 14:57:07 crc kubenswrapper[4751]: I0131 14:57:07.461773 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-jqv5m" Jan 31 14:57:07 crc kubenswrapper[4751]: I0131 14:57:07.484755 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/rabbitmq-server-0" podStartSLOduration=36.722083716 podStartE2EDuration="45.484731496s" podCreationTimestamp="2026-01-31 14:56:22 +0000 UTC" firstStartedPulling="2026-01-31 14:56:24.659287145 +0000 UTC m=+889.034000040" lastFinishedPulling="2026-01-31 14:56:33.421934935 +0000 UTC m=+897.796647820" observedRunningTime="2026-01-31 14:57:07.477700771 +0000 UTC m=+931.852413666" watchObservedRunningTime="2026-01-31 14:57:07.484731496 +0000 UTC m=+931.859444401" Jan 31 14:57:07 crc kubenswrapper[4751]: I0131 14:57:07.496600 4751 scope.go:117] "RemoveContainer" containerID="9f513ef2d1f9802f83bbe5839191b499dfe0fb39587e4e549696c1f2a0b3b5c7" Jan 31 14:57:07 crc kubenswrapper[4751]: I0131 14:57:07.510996 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-jqv5m"] Jan 31 14:57:07 crc kubenswrapper[4751]: I0131 14:57:07.516866 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-jqv5m"] Jan 31 14:57:07 crc kubenswrapper[4751]: I0131 14:57:07.519104 4751 scope.go:117] "RemoveContainer" containerID="e8bb73c8fea313592a221df9e5dbff86d9e1c2c8a380afe92fecdcb99f557785" Jan 31 14:57:08 crc kubenswrapper[4751]: I0131 14:57:08.413980 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10399bf7-0161-488c-8001-e6ba927889e5" path="/var/lib/kubelet/pods/10399bf7-0161-488c-8001-e6ba927889e5/volumes" Jan 31 14:57:14 crc kubenswrapper[4751]: I0131 14:57:14.934119 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-index-vjs56"] Jan 31 14:57:14 crc kubenswrapper[4751]: E0131 14:57:14.935257 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10399bf7-0161-488c-8001-e6ba927889e5" containerName="registry-server" Jan 31 14:57:14 crc kubenswrapper[4751]: I0131 14:57:14.935290 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="10399bf7-0161-488c-8001-e6ba927889e5" containerName="registry-server" Jan 31 14:57:14 crc kubenswrapper[4751]: E0131 14:57:14.935324 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10399bf7-0161-488c-8001-e6ba927889e5" containerName="extract-content" Jan 31 14:57:14 crc kubenswrapper[4751]: I0131 14:57:14.935341 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="10399bf7-0161-488c-8001-e6ba927889e5" containerName="extract-content" Jan 31 14:57:14 crc kubenswrapper[4751]: E0131 14:57:14.935387 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf9b8e1-7d1f-4fe4-9111-c040d8842d40" containerName="registry-server" Jan 31 14:57:14 crc kubenswrapper[4751]: I0131 14:57:14.935405 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf9b8e1-7d1f-4fe4-9111-c040d8842d40" 
containerName="registry-server" Jan 31 14:57:14 crc kubenswrapper[4751]: E0131 14:57:14.935431 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10399bf7-0161-488c-8001-e6ba927889e5" containerName="extract-utilities" Jan 31 14:57:14 crc kubenswrapper[4751]: I0131 14:57:14.935449 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="10399bf7-0161-488c-8001-e6ba927889e5" containerName="extract-utilities" Jan 31 14:57:14 crc kubenswrapper[4751]: E0131 14:57:14.935467 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf9b8e1-7d1f-4fe4-9111-c040d8842d40" containerName="extract-content" Jan 31 14:57:14 crc kubenswrapper[4751]: I0131 14:57:14.935483 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf9b8e1-7d1f-4fe4-9111-c040d8842d40" containerName="extract-content" Jan 31 14:57:14 crc kubenswrapper[4751]: E0131 14:57:14.935512 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cbf9b8e1-7d1f-4fe4-9111-c040d8842d40" containerName="extract-utilities" Jan 31 14:57:14 crc kubenswrapper[4751]: I0131 14:57:14.935528 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbf9b8e1-7d1f-4fe4-9111-c040d8842d40" containerName="extract-utilities" Jan 31 14:57:14 crc kubenswrapper[4751]: I0131 14:57:14.935755 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="10399bf7-0161-488c-8001-e6ba927889e5" containerName="registry-server" Jan 31 14:57:14 crc kubenswrapper[4751]: I0131 14:57:14.935783 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="cbf9b8e1-7d1f-4fe4-9111-c040d8842d40" containerName="registry-server" Jan 31 14:57:14 crc kubenswrapper[4751]: I0131 14:57:14.936589 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-index-vjs56" Jan 31 14:57:14 crc kubenswrapper[4751]: I0131 14:57:14.941298 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-index-dockercfg-4qrpl" Jan 31 14:57:14 crc kubenswrapper[4751]: I0131 14:57:14.950640 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-index-vjs56"] Jan 31 14:57:15 crc kubenswrapper[4751]: I0131 14:57:15.071820 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vqp67\" (UniqueName: \"kubernetes.io/projected/95bedc09-cab6-4e6b-a210-8cb1f8b39601-kube-api-access-vqp67\") pod \"horizon-operator-index-vjs56\" (UID: \"95bedc09-cab6-4e6b-a210-8cb1f8b39601\") " pod="openstack-operators/horizon-operator-index-vjs56" Jan 31 14:57:15 crc kubenswrapper[4751]: I0131 14:57:15.173005 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vqp67\" (UniqueName: \"kubernetes.io/projected/95bedc09-cab6-4e6b-a210-8cb1f8b39601-kube-api-access-vqp67\") pod \"horizon-operator-index-vjs56\" (UID: \"95bedc09-cab6-4e6b-a210-8cb1f8b39601\") " pod="openstack-operators/horizon-operator-index-vjs56" Jan 31 14:57:15 crc kubenswrapper[4751]: I0131 14:57:15.207301 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vqp67\" (UniqueName: \"kubernetes.io/projected/95bedc09-cab6-4e6b-a210-8cb1f8b39601-kube-api-access-vqp67\") pod \"horizon-operator-index-vjs56\" (UID: \"95bedc09-cab6-4e6b-a210-8cb1f8b39601\") " pod="openstack-operators/horizon-operator-index-vjs56" Jan 31 14:57:15 crc kubenswrapper[4751]: I0131 14:57:15.270259 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-index-vjs56" Jan 31 14:57:15 crc kubenswrapper[4751]: I0131 14:57:15.779509 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-index-vjs56"] Jan 31 14:57:15 crc kubenswrapper[4751]: W0131 14:57:15.792742 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod95bedc09_cab6_4e6b_a210_8cb1f8b39601.slice/crio-480ede065aba964366fb7112751f9b1d88da7e06ab1cbbbb4f9f2d5f7ef6e632 WatchSource:0}: Error finding container 480ede065aba964366fb7112751f9b1d88da7e06ab1cbbbb4f9f2d5f7ef6e632: Status 404 returned error can't find the container with id 480ede065aba964366fb7112751f9b1d88da7e06ab1cbbbb4f9f2d5f7ef6e632 Jan 31 14:57:16 crc kubenswrapper[4751]: I0131 14:57:16.538984 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-index-vjs56" event={"ID":"95bedc09-cab6-4e6b-a210-8cb1f8b39601","Type":"ContainerStarted","Data":"480ede065aba964366fb7112751f9b1d88da7e06ab1cbbbb4f9f2d5f7ef6e632"} Jan 31 14:57:17 crc kubenswrapper[4751]: I0131 14:57:17.549224 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-index-vjs56" event={"ID":"95bedc09-cab6-4e6b-a210-8cb1f8b39601","Type":"ContainerStarted","Data":"4a2e12dfd21dff2f78267376159e9e47a8fbc75b43ecb1ce27474c3a534a3f4b"} Jan 31 14:57:17 crc kubenswrapper[4751]: I0131 14:57:17.577701 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-index-vjs56" podStartSLOduration=2.486879434 podStartE2EDuration="3.577676138s" podCreationTimestamp="2026-01-31 14:57:14 +0000 UTC" firstStartedPulling="2026-01-31 14:57:15.796728132 +0000 UTC m=+940.171441057" lastFinishedPulling="2026-01-31 14:57:16.887524876 +0000 UTC m=+941.262237761" observedRunningTime="2026-01-31 14:57:17.569540454 +0000 UTC m=+941.944253399" 
watchObservedRunningTime="2026-01-31 14:57:17.577676138 +0000 UTC m=+941.952389063" Jan 31 14:57:19 crc kubenswrapper[4751]: I0131 14:57:19.736168 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-index-75pvx"] Jan 31 14:57:19 crc kubenswrapper[4751]: I0131 14:57:19.738433 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-index-75pvx" Jan 31 14:57:19 crc kubenswrapper[4751]: I0131 14:57:19.741645 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-index-dockercfg-npvsh" Jan 31 14:57:19 crc kubenswrapper[4751]: I0131 14:57:19.747762 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-index-75pvx"] Jan 31 14:57:19 crc kubenswrapper[4751]: I0131 14:57:19.846134 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpg2k\" (UniqueName: \"kubernetes.io/projected/065b8624-7cdb-463c-9636-d3e980119eb7-kube-api-access-qpg2k\") pod \"swift-operator-index-75pvx\" (UID: \"065b8624-7cdb-463c-9636-d3e980119eb7\") " pod="openstack-operators/swift-operator-index-75pvx" Jan 31 14:57:19 crc kubenswrapper[4751]: I0131 14:57:19.947366 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpg2k\" (UniqueName: \"kubernetes.io/projected/065b8624-7cdb-463c-9636-d3e980119eb7-kube-api-access-qpg2k\") pod \"swift-operator-index-75pvx\" (UID: \"065b8624-7cdb-463c-9636-d3e980119eb7\") " pod="openstack-operators/swift-operator-index-75pvx" Jan 31 14:57:19 crc kubenswrapper[4751]: I0131 14:57:19.973049 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qpg2k\" (UniqueName: \"kubernetes.io/projected/065b8624-7cdb-463c-9636-d3e980119eb7-kube-api-access-qpg2k\") pod \"swift-operator-index-75pvx\" (UID: \"065b8624-7cdb-463c-9636-d3e980119eb7\") " 
pod="openstack-operators/swift-operator-index-75pvx" Jan 31 14:57:20 crc kubenswrapper[4751]: I0131 14:57:20.108936 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-index-75pvx" Jan 31 14:57:20 crc kubenswrapper[4751]: I0131 14:57:20.588027 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-index-75pvx"] Jan 31 14:57:20 crc kubenswrapper[4751]: W0131 14:57:20.595846 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod065b8624_7cdb_463c_9636_d3e980119eb7.slice/crio-5f493f4e9467a7936b5f9e1ffc78338268f76a1484833cfafa1962d6944fc1c3 WatchSource:0}: Error finding container 5f493f4e9467a7936b5f9e1ffc78338268f76a1484833cfafa1962d6944fc1c3: Status 404 returned error can't find the container with id 5f493f4e9467a7936b5f9e1ffc78338268f76a1484833cfafa1962d6944fc1c3 Jan 31 14:57:21 crc kubenswrapper[4751]: I0131 14:57:21.579986 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-75pvx" event={"ID":"065b8624-7cdb-463c-9636-d3e980119eb7","Type":"ContainerStarted","Data":"5f493f4e9467a7936b5f9e1ffc78338268f76a1484833cfafa1962d6944fc1c3"} Jan 31 14:57:22 crc kubenswrapper[4751]: I0131 14:57:22.591971 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-75pvx" event={"ID":"065b8624-7cdb-463c-9636-d3e980119eb7","Type":"ContainerStarted","Data":"d60e188fde30ce119895fe465702862991673f5195ee276a966d98efdbbb7cf3"} Jan 31 14:57:22 crc kubenswrapper[4751]: I0131 14:57:22.613548 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-index-75pvx" podStartSLOduration=2.71122359 podStartE2EDuration="3.613529075s" podCreationTimestamp="2026-01-31 14:57:19 +0000 UTC" firstStartedPulling="2026-01-31 14:57:20.598819818 +0000 UTC m=+944.973532703" 
lastFinishedPulling="2026-01-31 14:57:21.501125303 +0000 UTC m=+945.875838188" observedRunningTime="2026-01-31 14:57:22.607295811 +0000 UTC m=+946.982008696" watchObservedRunningTime="2026-01-31 14:57:22.613529075 +0000 UTC m=+946.988241960" Jan 31 14:57:24 crc kubenswrapper[4751]: I0131 14:57:24.331367 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 14:57:25 crc kubenswrapper[4751]: I0131 14:57:25.271031 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/horizon-operator-index-vjs56" Jan 31 14:57:25 crc kubenswrapper[4751]: I0131 14:57:25.271341 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-index-vjs56" Jan 31 14:57:25 crc kubenswrapper[4751]: I0131 14:57:25.312692 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/horizon-operator-index-vjs56" Jan 31 14:57:25 crc kubenswrapper[4751]: I0131 14:57:25.633203 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-index-vjs56" Jan 31 14:57:28 crc kubenswrapper[4751]: I0131 14:57:28.659951 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/keystone-db-create-pl5bs"] Jan 31 14:57:28 crc kubenswrapper[4751]: I0131 14:57:28.661233 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-db-create-pl5bs" Jan 31 14:57:28 crc kubenswrapper[4751]: I0131 14:57:28.665851 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/keystone-dfde-account-create-update-vbcnd"] Jan 31 14:57:28 crc kubenswrapper[4751]: I0131 14:57:28.667040 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-dfde-account-create-update-vbcnd" Jan 31 14:57:28 crc kubenswrapper[4751]: I0131 14:57:28.671408 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-db-secret" Jan 31 14:57:28 crc kubenswrapper[4751]: I0131 14:57:28.676850 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-db-create-pl5bs"] Jan 31 14:57:28 crc kubenswrapper[4751]: I0131 14:57:28.680179 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-dfde-account-create-update-vbcnd"] Jan 31 14:57:28 crc kubenswrapper[4751]: I0131 14:57:28.770847 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06a47516-5cf6-431b-86ee-7732bd88fed4-operator-scripts\") pod \"keystone-dfde-account-create-update-vbcnd\" (UID: \"06a47516-5cf6-431b-86ee-7732bd88fed4\") " pod="glance-kuttl-tests/keystone-dfde-account-create-update-vbcnd" Jan 31 14:57:28 crc kubenswrapper[4751]: I0131 14:57:28.771462 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/568d26c9-1fe8-4e01-a7c0-cbe91951fe60-operator-scripts\") pod \"keystone-db-create-pl5bs\" (UID: \"568d26c9-1fe8-4e01-a7c0-cbe91951fe60\") " pod="glance-kuttl-tests/keystone-db-create-pl5bs" Jan 31 14:57:28 crc kubenswrapper[4751]: I0131 14:57:28.772131 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbsp6\" (UniqueName: \"kubernetes.io/projected/568d26c9-1fe8-4e01-a7c0-cbe91951fe60-kube-api-access-mbsp6\") pod \"keystone-db-create-pl5bs\" (UID: \"568d26c9-1fe8-4e01-a7c0-cbe91951fe60\") " pod="glance-kuttl-tests/keystone-db-create-pl5bs" Jan 31 14:57:28 crc kubenswrapper[4751]: I0131 14:57:28.772214 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xmt5z\" (UniqueName: \"kubernetes.io/projected/06a47516-5cf6-431b-86ee-7732bd88fed4-kube-api-access-xmt5z\") pod \"keystone-dfde-account-create-update-vbcnd\" (UID: \"06a47516-5cf6-431b-86ee-7732bd88fed4\") " pod="glance-kuttl-tests/keystone-dfde-account-create-update-vbcnd" Jan 31 14:57:28 crc kubenswrapper[4751]: I0131 14:57:28.873149 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mbsp6\" (UniqueName: \"kubernetes.io/projected/568d26c9-1fe8-4e01-a7c0-cbe91951fe60-kube-api-access-mbsp6\") pod \"keystone-db-create-pl5bs\" (UID: \"568d26c9-1fe8-4e01-a7c0-cbe91951fe60\") " pod="glance-kuttl-tests/keystone-db-create-pl5bs" Jan 31 14:57:28 crc kubenswrapper[4751]: I0131 14:57:28.873202 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xmt5z\" (UniqueName: \"kubernetes.io/projected/06a47516-5cf6-431b-86ee-7732bd88fed4-kube-api-access-xmt5z\") pod \"keystone-dfde-account-create-update-vbcnd\" (UID: \"06a47516-5cf6-431b-86ee-7732bd88fed4\") " pod="glance-kuttl-tests/keystone-dfde-account-create-update-vbcnd" Jan 31 14:57:28 crc kubenswrapper[4751]: I0131 14:57:28.873246 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06a47516-5cf6-431b-86ee-7732bd88fed4-operator-scripts\") pod \"keystone-dfde-account-create-update-vbcnd\" (UID: \"06a47516-5cf6-431b-86ee-7732bd88fed4\") " pod="glance-kuttl-tests/keystone-dfde-account-create-update-vbcnd" Jan 31 14:57:28 crc kubenswrapper[4751]: I0131 14:57:28.873318 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/568d26c9-1fe8-4e01-a7c0-cbe91951fe60-operator-scripts\") pod \"keystone-db-create-pl5bs\" (UID: \"568d26c9-1fe8-4e01-a7c0-cbe91951fe60\") " 
pod="glance-kuttl-tests/keystone-db-create-pl5bs" Jan 31 14:57:28 crc kubenswrapper[4751]: I0131 14:57:28.874259 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06a47516-5cf6-431b-86ee-7732bd88fed4-operator-scripts\") pod \"keystone-dfde-account-create-update-vbcnd\" (UID: \"06a47516-5cf6-431b-86ee-7732bd88fed4\") " pod="glance-kuttl-tests/keystone-dfde-account-create-update-vbcnd" Jan 31 14:57:28 crc kubenswrapper[4751]: I0131 14:57:28.874272 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/568d26c9-1fe8-4e01-a7c0-cbe91951fe60-operator-scripts\") pod \"keystone-db-create-pl5bs\" (UID: \"568d26c9-1fe8-4e01-a7c0-cbe91951fe60\") " pod="glance-kuttl-tests/keystone-db-create-pl5bs" Jan 31 14:57:28 crc kubenswrapper[4751]: I0131 14:57:28.892016 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xmt5z\" (UniqueName: \"kubernetes.io/projected/06a47516-5cf6-431b-86ee-7732bd88fed4-kube-api-access-xmt5z\") pod \"keystone-dfde-account-create-update-vbcnd\" (UID: \"06a47516-5cf6-431b-86ee-7732bd88fed4\") " pod="glance-kuttl-tests/keystone-dfde-account-create-update-vbcnd" Jan 31 14:57:28 crc kubenswrapper[4751]: I0131 14:57:28.892060 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbsp6\" (UniqueName: \"kubernetes.io/projected/568d26c9-1fe8-4e01-a7c0-cbe91951fe60-kube-api-access-mbsp6\") pod \"keystone-db-create-pl5bs\" (UID: \"568d26c9-1fe8-4e01-a7c0-cbe91951fe60\") " pod="glance-kuttl-tests/keystone-db-create-pl5bs" Jan 31 14:57:29 crc kubenswrapper[4751]: I0131 14:57:29.046324 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-dfde-account-create-update-vbcnd" Jan 31 14:57:29 crc kubenswrapper[4751]: I0131 14:57:29.047696 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-db-create-pl5bs" Jan 31 14:57:29 crc kubenswrapper[4751]: I0131 14:57:29.520235 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-dfde-account-create-update-vbcnd"] Jan 31 14:57:29 crc kubenswrapper[4751]: I0131 14:57:29.570848 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-db-create-pl5bs"] Jan 31 14:57:29 crc kubenswrapper[4751]: W0131 14:57:29.603189 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod568d26c9_1fe8_4e01_a7c0_cbe91951fe60.slice/crio-bd00717c7915ab2ac4085f5f187393778b80d29e2118670743465637f5e79db6 WatchSource:0}: Error finding container bd00717c7915ab2ac4085f5f187393778b80d29e2118670743465637f5e79db6: Status 404 returned error can't find the container with id bd00717c7915ab2ac4085f5f187393778b80d29e2118670743465637f5e79db6 Jan 31 14:57:29 crc kubenswrapper[4751]: I0131 14:57:29.644800 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-create-pl5bs" event={"ID":"568d26c9-1fe8-4e01-a7c0-cbe91951fe60","Type":"ContainerStarted","Data":"bd00717c7915ab2ac4085f5f187393778b80d29e2118670743465637f5e79db6"} Jan 31 14:57:29 crc kubenswrapper[4751]: I0131 14:57:29.646601 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-dfde-account-create-update-vbcnd" event={"ID":"06a47516-5cf6-431b-86ee-7732bd88fed4","Type":"ContainerStarted","Data":"4717c7f2329c1c3fcafc8e5559236099ccd5c56439f278f17fdefcf2a479b42c"} Jan 31 14:57:30 crc kubenswrapper[4751]: I0131 14:57:30.109617 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openstack-operators/swift-operator-index-75pvx" Jan 31 14:57:30 crc kubenswrapper[4751]: I0131 14:57:30.109966 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-index-75pvx" Jan 31 14:57:30 crc kubenswrapper[4751]: I0131 14:57:30.149028 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/swift-operator-index-75pvx" Jan 31 14:57:30 crc kubenswrapper[4751]: I0131 14:57:30.655899 4751 generic.go:334] "Generic (PLEG): container finished" podID="568d26c9-1fe8-4e01-a7c0-cbe91951fe60" containerID="457ea80a5f749ea606e6892b07ad8e22c7b832800f0f223bc54849035a17270d" exitCode=0 Jan 31 14:57:30 crc kubenswrapper[4751]: I0131 14:57:30.655951 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-create-pl5bs" event={"ID":"568d26c9-1fe8-4e01-a7c0-cbe91951fe60","Type":"ContainerDied","Data":"457ea80a5f749ea606e6892b07ad8e22c7b832800f0f223bc54849035a17270d"} Jan 31 14:57:30 crc kubenswrapper[4751]: I0131 14:57:30.657896 4751 generic.go:334] "Generic (PLEG): container finished" podID="06a47516-5cf6-431b-86ee-7732bd88fed4" containerID="eec75dcec16927bdd78c685c8995e59bfecf459a9739faabb410481b5046b1fb" exitCode=0 Jan 31 14:57:30 crc kubenswrapper[4751]: I0131 14:57:30.657935 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-dfde-account-create-update-vbcnd" event={"ID":"06a47516-5cf6-431b-86ee-7732bd88fed4","Type":"ContainerDied","Data":"eec75dcec16927bdd78c685c8995e59bfecf459a9739faabb410481b5046b1fb"} Jan 31 14:57:30 crc kubenswrapper[4751]: I0131 14:57:30.690336 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-index-75pvx" Jan 31 14:57:32 crc kubenswrapper[4751]: I0131 14:57:32.053914 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-dfde-account-create-update-vbcnd" Jan 31 14:57:32 crc kubenswrapper[4751]: I0131 14:57:32.062294 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-db-create-pl5bs" Jan 31 14:57:32 crc kubenswrapper[4751]: I0131 14:57:32.122322 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06a47516-5cf6-431b-86ee-7732bd88fed4-operator-scripts\") pod \"06a47516-5cf6-431b-86ee-7732bd88fed4\" (UID: \"06a47516-5cf6-431b-86ee-7732bd88fed4\") " Jan 31 14:57:32 crc kubenswrapper[4751]: I0131 14:57:32.122369 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbsp6\" (UniqueName: \"kubernetes.io/projected/568d26c9-1fe8-4e01-a7c0-cbe91951fe60-kube-api-access-mbsp6\") pod \"568d26c9-1fe8-4e01-a7c0-cbe91951fe60\" (UID: \"568d26c9-1fe8-4e01-a7c0-cbe91951fe60\") " Jan 31 14:57:32 crc kubenswrapper[4751]: I0131 14:57:32.122407 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/568d26c9-1fe8-4e01-a7c0-cbe91951fe60-operator-scripts\") pod \"568d26c9-1fe8-4e01-a7c0-cbe91951fe60\" (UID: \"568d26c9-1fe8-4e01-a7c0-cbe91951fe60\") " Jan 31 14:57:32 crc kubenswrapper[4751]: I0131 14:57:32.122446 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xmt5z\" (UniqueName: \"kubernetes.io/projected/06a47516-5cf6-431b-86ee-7732bd88fed4-kube-api-access-xmt5z\") pod \"06a47516-5cf6-431b-86ee-7732bd88fed4\" (UID: \"06a47516-5cf6-431b-86ee-7732bd88fed4\") " Jan 31 14:57:32 crc kubenswrapper[4751]: I0131 14:57:32.123581 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/568d26c9-1fe8-4e01-a7c0-cbe91951fe60-operator-scripts" (OuterVolumeSpecName: 
"operator-scripts") pod "568d26c9-1fe8-4e01-a7c0-cbe91951fe60" (UID: "568d26c9-1fe8-4e01-a7c0-cbe91951fe60"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:57:32 crc kubenswrapper[4751]: I0131 14:57:32.124625 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/06a47516-5cf6-431b-86ee-7732bd88fed4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "06a47516-5cf6-431b-86ee-7732bd88fed4" (UID: "06a47516-5cf6-431b-86ee-7732bd88fed4"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:57:32 crc kubenswrapper[4751]: I0131 14:57:32.130453 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/568d26c9-1fe8-4e01-a7c0-cbe91951fe60-kube-api-access-mbsp6" (OuterVolumeSpecName: "kube-api-access-mbsp6") pod "568d26c9-1fe8-4e01-a7c0-cbe91951fe60" (UID: "568d26c9-1fe8-4e01-a7c0-cbe91951fe60"). InnerVolumeSpecName "kube-api-access-mbsp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:57:32 crc kubenswrapper[4751]: I0131 14:57:32.132370 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/06a47516-5cf6-431b-86ee-7732bd88fed4-kube-api-access-xmt5z" (OuterVolumeSpecName: "kube-api-access-xmt5z") pod "06a47516-5cf6-431b-86ee-7732bd88fed4" (UID: "06a47516-5cf6-431b-86ee-7732bd88fed4"). InnerVolumeSpecName "kube-api-access-xmt5z". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:57:32 crc kubenswrapper[4751]: I0131 14:57:32.224126 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/06a47516-5cf6-431b-86ee-7732bd88fed4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:32 crc kubenswrapper[4751]: I0131 14:57:32.224205 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mbsp6\" (UniqueName: \"kubernetes.io/projected/568d26c9-1fe8-4e01-a7c0-cbe91951fe60-kube-api-access-mbsp6\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:32 crc kubenswrapper[4751]: I0131 14:57:32.224219 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/568d26c9-1fe8-4e01-a7c0-cbe91951fe60-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:32 crc kubenswrapper[4751]: I0131 14:57:32.224232 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xmt5z\" (UniqueName: \"kubernetes.io/projected/06a47516-5cf6-431b-86ee-7732bd88fed4-kube-api-access-xmt5z\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:32 crc kubenswrapper[4751]: I0131 14:57:32.676559 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-dfde-account-create-update-vbcnd" event={"ID":"06a47516-5cf6-431b-86ee-7732bd88fed4","Type":"ContainerDied","Data":"4717c7f2329c1c3fcafc8e5559236099ccd5c56439f278f17fdefcf2a479b42c"} Jan 31 14:57:32 crc kubenswrapper[4751]: I0131 14:57:32.676632 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-dfde-account-create-update-vbcnd" Jan 31 14:57:32 crc kubenswrapper[4751]: I0131 14:57:32.676636 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4717c7f2329c1c3fcafc8e5559236099ccd5c56439f278f17fdefcf2a479b42c" Jan 31 14:57:32 crc kubenswrapper[4751]: I0131 14:57:32.681160 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-create-pl5bs" event={"ID":"568d26c9-1fe8-4e01-a7c0-cbe91951fe60","Type":"ContainerDied","Data":"bd00717c7915ab2ac4085f5f187393778b80d29e2118670743465637f5e79db6"} Jan 31 14:57:32 crc kubenswrapper[4751]: I0131 14:57:32.681443 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd00717c7915ab2ac4085f5f187393778b80d29e2118670743465637f5e79db6" Jan 31 14:57:32 crc kubenswrapper[4751]: I0131 14:57:32.681213 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-db-create-pl5bs" Jan 31 14:57:34 crc kubenswrapper[4751]: I0131 14:57:34.237106 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/keystone-db-sync-56bwv"] Jan 31 14:57:34 crc kubenswrapper[4751]: E0131 14:57:34.238537 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="568d26c9-1fe8-4e01-a7c0-cbe91951fe60" containerName="mariadb-database-create" Jan 31 14:57:34 crc kubenswrapper[4751]: I0131 14:57:34.238630 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="568d26c9-1fe8-4e01-a7c0-cbe91951fe60" containerName="mariadb-database-create" Jan 31 14:57:34 crc kubenswrapper[4751]: E0131 14:57:34.238723 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="06a47516-5cf6-431b-86ee-7732bd88fed4" containerName="mariadb-account-create-update" Jan 31 14:57:34 crc kubenswrapper[4751]: I0131 14:57:34.238797 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="06a47516-5cf6-431b-86ee-7732bd88fed4" 
containerName="mariadb-account-create-update" Jan 31 14:57:34 crc kubenswrapper[4751]: I0131 14:57:34.238996 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="06a47516-5cf6-431b-86ee-7732bd88fed4" containerName="mariadb-account-create-update" Jan 31 14:57:34 crc kubenswrapper[4751]: I0131 14:57:34.239149 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="568d26c9-1fe8-4e01-a7c0-cbe91951fe60" containerName="mariadb-database-create" Jan 31 14:57:34 crc kubenswrapper[4751]: I0131 14:57:34.239781 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-db-sync-56bwv" Jan 31 14:57:34 crc kubenswrapper[4751]: I0131 14:57:34.243051 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-scripts" Jan 31 14:57:34 crc kubenswrapper[4751]: I0131 14:57:34.243291 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-config-data" Jan 31 14:57:34 crc kubenswrapper[4751]: I0131 14:57:34.243521 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-keystone-dockercfg-szsvc" Jan 31 14:57:34 crc kubenswrapper[4751]: I0131 14:57:34.245650 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone" Jan 31 14:57:34 crc kubenswrapper[4751]: I0131 14:57:34.252607 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-db-sync-56bwv"] Jan 31 14:57:34 crc kubenswrapper[4751]: I0131 14:57:34.355668 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hq52\" (UniqueName: \"kubernetes.io/projected/ff5e8bad-e481-445e-99e8-5a5487e908d8-kube-api-access-7hq52\") pod \"keystone-db-sync-56bwv\" (UID: \"ff5e8bad-e481-445e-99e8-5a5487e908d8\") " pod="glance-kuttl-tests/keystone-db-sync-56bwv" Jan 31 14:57:34 crc kubenswrapper[4751]: I0131 
14:57:34.355935 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff5e8bad-e481-445e-99e8-5a5487e908d8-config-data\") pod \"keystone-db-sync-56bwv\" (UID: \"ff5e8bad-e481-445e-99e8-5a5487e908d8\") " pod="glance-kuttl-tests/keystone-db-sync-56bwv" Jan 31 14:57:34 crc kubenswrapper[4751]: I0131 14:57:34.457863 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hq52\" (UniqueName: \"kubernetes.io/projected/ff5e8bad-e481-445e-99e8-5a5487e908d8-kube-api-access-7hq52\") pod \"keystone-db-sync-56bwv\" (UID: \"ff5e8bad-e481-445e-99e8-5a5487e908d8\") " pod="glance-kuttl-tests/keystone-db-sync-56bwv" Jan 31 14:57:34 crc kubenswrapper[4751]: I0131 14:57:34.458014 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff5e8bad-e481-445e-99e8-5a5487e908d8-config-data\") pod \"keystone-db-sync-56bwv\" (UID: \"ff5e8bad-e481-445e-99e8-5a5487e908d8\") " pod="glance-kuttl-tests/keystone-db-sync-56bwv" Jan 31 14:57:34 crc kubenswrapper[4751]: I0131 14:57:34.468279 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff5e8bad-e481-445e-99e8-5a5487e908d8-config-data\") pod \"keystone-db-sync-56bwv\" (UID: \"ff5e8bad-e481-445e-99e8-5a5487e908d8\") " pod="glance-kuttl-tests/keystone-db-sync-56bwv" Jan 31 14:57:34 crc kubenswrapper[4751]: I0131 14:57:34.488000 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hq52\" (UniqueName: \"kubernetes.io/projected/ff5e8bad-e481-445e-99e8-5a5487e908d8-kube-api-access-7hq52\") pod \"keystone-db-sync-56bwv\" (UID: \"ff5e8bad-e481-445e-99e8-5a5487e908d8\") " pod="glance-kuttl-tests/keystone-db-sync-56bwv" Jan 31 14:57:34 crc kubenswrapper[4751]: I0131 14:57:34.555635 4751 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="glance-kuttl-tests/keystone-db-sync-56bwv" Jan 31 14:57:34 crc kubenswrapper[4751]: I0131 14:57:34.784563 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-db-sync-56bwv"] Jan 31 14:57:35 crc kubenswrapper[4751]: I0131 14:57:35.703219 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-sync-56bwv" event={"ID":"ff5e8bad-e481-445e-99e8-5a5487e908d8","Type":"ContainerStarted","Data":"ecc8a96f4fea25c974208298436bf510666bd64b0748730b17c3c0f483b01a86"} Jan 31 14:57:36 crc kubenswrapper[4751]: I0131 14:57:36.558210 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4bbztq"] Jan 31 14:57:36 crc kubenswrapper[4751]: I0131 14:57:36.560192 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4bbztq" Jan 31 14:57:36 crc kubenswrapper[4751]: I0131 14:57:36.566183 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-wxkjx" Jan 31 14:57:36 crc kubenswrapper[4751]: I0131 14:57:36.566809 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4bbztq"] Jan 31 14:57:36 crc kubenswrapper[4751]: I0131 14:57:36.692869 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnpjj\" (UniqueName: \"kubernetes.io/projected/886303a3-d05b-4551-bd03-ebc2e2aef77c-kube-api-access-rnpjj\") pod \"70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4bbztq\" (UID: \"886303a3-d05b-4551-bd03-ebc2e2aef77c\") " pod="openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4bbztq" Jan 31 14:57:36 crc kubenswrapper[4751]: I0131 14:57:36.692948 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/886303a3-d05b-4551-bd03-ebc2e2aef77c-bundle\") pod \"70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4bbztq\" (UID: \"886303a3-d05b-4551-bd03-ebc2e2aef77c\") " pod="openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4bbztq" Jan 31 14:57:36 crc kubenswrapper[4751]: I0131 14:57:36.692969 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/886303a3-d05b-4551-bd03-ebc2e2aef77c-util\") pod \"70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4bbztq\" (UID: \"886303a3-d05b-4551-bd03-ebc2e2aef77c\") " pod="openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4bbztq" Jan 31 14:57:36 crc kubenswrapper[4751]: I0131 14:57:36.793929 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnpjj\" (UniqueName: \"kubernetes.io/projected/886303a3-d05b-4551-bd03-ebc2e2aef77c-kube-api-access-rnpjj\") pod \"70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4bbztq\" (UID: \"886303a3-d05b-4551-bd03-ebc2e2aef77c\") " pod="openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4bbztq" Jan 31 14:57:36 crc kubenswrapper[4751]: I0131 14:57:36.793997 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/886303a3-d05b-4551-bd03-ebc2e2aef77c-bundle\") pod \"70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4bbztq\" (UID: \"886303a3-d05b-4551-bd03-ebc2e2aef77c\") " pod="openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4bbztq" Jan 31 14:57:36 crc kubenswrapper[4751]: I0131 14:57:36.794020 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/886303a3-d05b-4551-bd03-ebc2e2aef77c-util\") pod \"70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4bbztq\" (UID: \"886303a3-d05b-4551-bd03-ebc2e2aef77c\") " pod="openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4bbztq" Jan 31 14:57:36 crc kubenswrapper[4751]: I0131 14:57:36.794498 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/886303a3-d05b-4551-bd03-ebc2e2aef77c-util\") pod \"70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4bbztq\" (UID: \"886303a3-d05b-4551-bd03-ebc2e2aef77c\") " pod="openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4bbztq" Jan 31 14:57:36 crc kubenswrapper[4751]: I0131 14:57:36.795036 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/886303a3-d05b-4551-bd03-ebc2e2aef77c-bundle\") pod \"70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4bbztq\" (UID: \"886303a3-d05b-4551-bd03-ebc2e2aef77c\") " pod="openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4bbztq" Jan 31 14:57:36 crc kubenswrapper[4751]: I0131 14:57:36.816956 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnpjj\" (UniqueName: \"kubernetes.io/projected/886303a3-d05b-4551-bd03-ebc2e2aef77c-kube-api-access-rnpjj\") pod \"70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4bbztq\" (UID: \"886303a3-d05b-4551-bd03-ebc2e2aef77c\") " pod="openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4bbztq" Jan 31 14:57:36 crc kubenswrapper[4751]: I0131 14:57:36.892510 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4bbztq" Jan 31 14:57:37 crc kubenswrapper[4751]: I0131 14:57:37.382762 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4bbztq"] Jan 31 14:57:37 crc kubenswrapper[4751]: W0131 14:57:37.396191 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod886303a3_d05b_4551_bd03_ebc2e2aef77c.slice/crio-afefcd8155138e543666c28dd288356aaa1ab98ddb01c246f7d57e4269dc77ee WatchSource:0}: Error finding container afefcd8155138e543666c28dd288356aaa1ab98ddb01c246f7d57e4269dc77ee: Status 404 returned error can't find the container with id afefcd8155138e543666c28dd288356aaa1ab98ddb01c246f7d57e4269dc77ee Jan 31 14:57:37 crc kubenswrapper[4751]: I0131 14:57:37.545550 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f"] Jan 31 14:57:37 crc kubenswrapper[4751]: I0131 14:57:37.546930 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f" Jan 31 14:57:37 crc kubenswrapper[4751]: I0131 14:57:37.559342 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f"] Jan 31 14:57:37 crc kubenswrapper[4751]: I0131 14:57:37.605049 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eec59a88-8f4d-4482-aa2a-11a508cc3a79-util\") pod \"920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f\" (UID: \"eec59a88-8f4d-4482-aa2a-11a508cc3a79\") " pod="openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f" Jan 31 14:57:37 crc kubenswrapper[4751]: I0131 14:57:37.605252 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eec59a88-8f4d-4482-aa2a-11a508cc3a79-bundle\") pod \"920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f\" (UID: \"eec59a88-8f4d-4482-aa2a-11a508cc3a79\") " pod="openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f" Jan 31 14:57:37 crc kubenswrapper[4751]: I0131 14:57:37.605308 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gs89j\" (UniqueName: \"kubernetes.io/projected/eec59a88-8f4d-4482-aa2a-11a508cc3a79-kube-api-access-gs89j\") pod \"920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f\" (UID: \"eec59a88-8f4d-4482-aa2a-11a508cc3a79\") " pod="openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f" Jan 31 14:57:37 crc kubenswrapper[4751]: I0131 14:57:37.706296 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/eec59a88-8f4d-4482-aa2a-11a508cc3a79-bundle\") pod \"920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f\" (UID: \"eec59a88-8f4d-4482-aa2a-11a508cc3a79\") " pod="openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f" Jan 31 14:57:37 crc kubenswrapper[4751]: I0131 14:57:37.706347 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gs89j\" (UniqueName: \"kubernetes.io/projected/eec59a88-8f4d-4482-aa2a-11a508cc3a79-kube-api-access-gs89j\") pod \"920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f\" (UID: \"eec59a88-8f4d-4482-aa2a-11a508cc3a79\") " pod="openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f" Jan 31 14:57:37 crc kubenswrapper[4751]: I0131 14:57:37.706389 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eec59a88-8f4d-4482-aa2a-11a508cc3a79-util\") pod \"920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f\" (UID: \"eec59a88-8f4d-4482-aa2a-11a508cc3a79\") " pod="openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f" Jan 31 14:57:37 crc kubenswrapper[4751]: I0131 14:57:37.706761 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eec59a88-8f4d-4482-aa2a-11a508cc3a79-util\") pod \"920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f\" (UID: \"eec59a88-8f4d-4482-aa2a-11a508cc3a79\") " pod="openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f" Jan 31 14:57:37 crc kubenswrapper[4751]: I0131 14:57:37.706841 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eec59a88-8f4d-4482-aa2a-11a508cc3a79-bundle\") pod \"920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f\" (UID: 
\"eec59a88-8f4d-4482-aa2a-11a508cc3a79\") " pod="openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f" Jan 31 14:57:37 crc kubenswrapper[4751]: I0131 14:57:37.719221 4751 generic.go:334] "Generic (PLEG): container finished" podID="886303a3-d05b-4551-bd03-ebc2e2aef77c" containerID="ac644719d568c7b156ce9cbb766a2f8c70e69f2f94ca1bad0488a7736c5cd6c9" exitCode=0 Jan 31 14:57:37 crc kubenswrapper[4751]: I0131 14:57:37.719266 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4bbztq" event={"ID":"886303a3-d05b-4551-bd03-ebc2e2aef77c","Type":"ContainerDied","Data":"ac644719d568c7b156ce9cbb766a2f8c70e69f2f94ca1bad0488a7736c5cd6c9"} Jan 31 14:57:37 crc kubenswrapper[4751]: I0131 14:57:37.719291 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4bbztq" event={"ID":"886303a3-d05b-4551-bd03-ebc2e2aef77c","Type":"ContainerStarted","Data":"afefcd8155138e543666c28dd288356aaa1ab98ddb01c246f7d57e4269dc77ee"} Jan 31 14:57:37 crc kubenswrapper[4751]: I0131 14:57:37.727005 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gs89j\" (UniqueName: \"kubernetes.io/projected/eec59a88-8f4d-4482-aa2a-11a508cc3a79-kube-api-access-gs89j\") pod \"920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f\" (UID: \"eec59a88-8f4d-4482-aa2a-11a508cc3a79\") " pod="openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f" Jan 31 14:57:37 crc kubenswrapper[4751]: I0131 14:57:37.881858 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f" Jan 31 14:57:45 crc kubenswrapper[4751]: I0131 14:57:45.782435 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-sync-56bwv" event={"ID":"ff5e8bad-e481-445e-99e8-5a5487e908d8","Type":"ContainerStarted","Data":"5d457b880e70ab7d7bdcd88eb562c916f03e6b62d577ebf9192cc4974cd177f7"} Jan 31 14:57:45 crc kubenswrapper[4751]: I0131 14:57:45.786353 4751 generic.go:334] "Generic (PLEG): container finished" podID="886303a3-d05b-4551-bd03-ebc2e2aef77c" containerID="1985ee06fa1b0e5b47503229ec369a787fff12bff875d4cad0ea6a84e35d2169" exitCode=0 Jan 31 14:57:45 crc kubenswrapper[4751]: I0131 14:57:45.786401 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4bbztq" event={"ID":"886303a3-d05b-4551-bd03-ebc2e2aef77c","Type":"ContainerDied","Data":"1985ee06fa1b0e5b47503229ec369a787fff12bff875d4cad0ea6a84e35d2169"} Jan 31 14:57:45 crc kubenswrapper[4751]: I0131 14:57:45.803185 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/keystone-db-sync-56bwv" podStartSLOduration=1.105160552 podStartE2EDuration="11.803169889s" podCreationTimestamp="2026-01-31 14:57:34 +0000 UTC" firstStartedPulling="2026-01-31 14:57:34.796032357 +0000 UTC m=+959.170745252" lastFinishedPulling="2026-01-31 14:57:45.494041684 +0000 UTC m=+969.868754589" observedRunningTime="2026-01-31 14:57:45.801412813 +0000 UTC m=+970.176125698" watchObservedRunningTime="2026-01-31 14:57:45.803169889 +0000 UTC m=+970.177882774" Jan 31 14:57:45 crc kubenswrapper[4751]: W0131 14:57:45.896280 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeec59a88_8f4d_4482_aa2a_11a508cc3a79.slice/crio-a5c216ff73ca50f7f26c6f01ee85cb8e63a571945ededa582f156f42cc594b11 WatchSource:0}: 
Error finding container a5c216ff73ca50f7f26c6f01ee85cb8e63a571945ededa582f156f42cc594b11: Status 404 returned error can't find the container with id a5c216ff73ca50f7f26c6f01ee85cb8e63a571945ededa582f156f42cc594b11 Jan 31 14:57:45 crc kubenswrapper[4751]: I0131 14:57:45.897500 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f"] Jan 31 14:57:46 crc kubenswrapper[4751]: I0131 14:57:46.807938 4751 generic.go:334] "Generic (PLEG): container finished" podID="eec59a88-8f4d-4482-aa2a-11a508cc3a79" containerID="f300eeacb21ac9e2fbb188d6628873c007703196505315fe182c22ec9d5b15ea" exitCode=0 Jan 31 14:57:46 crc kubenswrapper[4751]: I0131 14:57:46.808005 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f" event={"ID":"eec59a88-8f4d-4482-aa2a-11a508cc3a79","Type":"ContainerDied","Data":"f300eeacb21ac9e2fbb188d6628873c007703196505315fe182c22ec9d5b15ea"} Jan 31 14:57:46 crc kubenswrapper[4751]: I0131 14:57:46.808579 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f" event={"ID":"eec59a88-8f4d-4482-aa2a-11a508cc3a79","Type":"ContainerStarted","Data":"a5c216ff73ca50f7f26c6f01ee85cb8e63a571945ededa582f156f42cc594b11"} Jan 31 14:57:46 crc kubenswrapper[4751]: I0131 14:57:46.811560 4751 generic.go:334] "Generic (PLEG): container finished" podID="886303a3-d05b-4551-bd03-ebc2e2aef77c" containerID="48586bec329cecb88f31df9f626d414b524092e8f0898f91d2fb0a6740d113ca" exitCode=0 Jan 31 14:57:46 crc kubenswrapper[4751]: I0131 14:57:46.811922 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4bbztq" 
event={"ID":"886303a3-d05b-4551-bd03-ebc2e2aef77c","Type":"ContainerDied","Data":"48586bec329cecb88f31df9f626d414b524092e8f0898f91d2fb0a6740d113ca"} Jan 31 14:57:47 crc kubenswrapper[4751]: I0131 14:57:47.819873 4751 generic.go:334] "Generic (PLEG): container finished" podID="eec59a88-8f4d-4482-aa2a-11a508cc3a79" containerID="e4090a5a7ccdd198b52342527d7dfe4217aa94455211ced22fbdbfbfcf820855" exitCode=0 Jan 31 14:57:47 crc kubenswrapper[4751]: I0131 14:57:47.819964 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f" event={"ID":"eec59a88-8f4d-4482-aa2a-11a508cc3a79","Type":"ContainerDied","Data":"e4090a5a7ccdd198b52342527d7dfe4217aa94455211ced22fbdbfbfcf820855"} Jan 31 14:57:48 crc kubenswrapper[4751]: I0131 14:57:48.138252 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4bbztq" Jan 31 14:57:48 crc kubenswrapper[4751]: I0131 14:57:48.165554 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/886303a3-d05b-4551-bd03-ebc2e2aef77c-bundle\") pod \"886303a3-d05b-4551-bd03-ebc2e2aef77c\" (UID: \"886303a3-d05b-4551-bd03-ebc2e2aef77c\") " Jan 31 14:57:48 crc kubenswrapper[4751]: I0131 14:57:48.165615 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/886303a3-d05b-4551-bd03-ebc2e2aef77c-util\") pod \"886303a3-d05b-4551-bd03-ebc2e2aef77c\" (UID: \"886303a3-d05b-4551-bd03-ebc2e2aef77c\") " Jan 31 14:57:48 crc kubenswrapper[4751]: I0131 14:57:48.165684 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnpjj\" (UniqueName: \"kubernetes.io/projected/886303a3-d05b-4551-bd03-ebc2e2aef77c-kube-api-access-rnpjj\") pod \"886303a3-d05b-4551-bd03-ebc2e2aef77c\" (UID: 
\"886303a3-d05b-4551-bd03-ebc2e2aef77c\") " Jan 31 14:57:48 crc kubenswrapper[4751]: I0131 14:57:48.167080 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/886303a3-d05b-4551-bd03-ebc2e2aef77c-bundle" (OuterVolumeSpecName: "bundle") pod "886303a3-d05b-4551-bd03-ebc2e2aef77c" (UID: "886303a3-d05b-4551-bd03-ebc2e2aef77c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:57:48 crc kubenswrapper[4751]: I0131 14:57:48.171622 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/886303a3-d05b-4551-bd03-ebc2e2aef77c-kube-api-access-rnpjj" (OuterVolumeSpecName: "kube-api-access-rnpjj") pod "886303a3-d05b-4551-bd03-ebc2e2aef77c" (UID: "886303a3-d05b-4551-bd03-ebc2e2aef77c"). InnerVolumeSpecName "kube-api-access-rnpjj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:57:48 crc kubenswrapper[4751]: I0131 14:57:48.175984 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/886303a3-d05b-4551-bd03-ebc2e2aef77c-util" (OuterVolumeSpecName: "util") pod "886303a3-d05b-4551-bd03-ebc2e2aef77c" (UID: "886303a3-d05b-4551-bd03-ebc2e2aef77c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:57:48 crc kubenswrapper[4751]: I0131 14:57:48.267862 4751 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/886303a3-d05b-4551-bd03-ebc2e2aef77c-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:48 crc kubenswrapper[4751]: I0131 14:57:48.267906 4751 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/886303a3-d05b-4551-bd03-ebc2e2aef77c-util\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:48 crc kubenswrapper[4751]: I0131 14:57:48.267918 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnpjj\" (UniqueName: \"kubernetes.io/projected/886303a3-d05b-4551-bd03-ebc2e2aef77c-kube-api-access-rnpjj\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:48 crc kubenswrapper[4751]: I0131 14:57:48.828462 4751 generic.go:334] "Generic (PLEG): container finished" podID="eec59a88-8f4d-4482-aa2a-11a508cc3a79" containerID="b51b06f6e48ff45871305e4af164ad58191ccac9b7a5f2b2bea9c0fdbdc14454" exitCode=0 Jan 31 14:57:48 crc kubenswrapper[4751]: I0131 14:57:48.828585 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f" event={"ID":"eec59a88-8f4d-4482-aa2a-11a508cc3a79","Type":"ContainerDied","Data":"b51b06f6e48ff45871305e4af164ad58191ccac9b7a5f2b2bea9c0fdbdc14454"} Jan 31 14:57:48 crc kubenswrapper[4751]: I0131 14:57:48.831377 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4bbztq" event={"ID":"886303a3-d05b-4551-bd03-ebc2e2aef77c","Type":"ContainerDied","Data":"afefcd8155138e543666c28dd288356aaa1ab98ddb01c246f7d57e4269dc77ee"} Jan 31 14:57:48 crc kubenswrapper[4751]: I0131 14:57:48.831409 4751 pod_container_deletor.go:80] "Container not found in pod's containers" 
containerID="afefcd8155138e543666c28dd288356aaa1ab98ddb01c246f7d57e4269dc77ee" Jan 31 14:57:48 crc kubenswrapper[4751]: I0131 14:57:48.831416 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4bbztq" Jan 31 14:57:49 crc kubenswrapper[4751]: I0131 14:57:49.838362 4751 generic.go:334] "Generic (PLEG): container finished" podID="ff5e8bad-e481-445e-99e8-5a5487e908d8" containerID="5d457b880e70ab7d7bdcd88eb562c916f03e6b62d577ebf9192cc4974cd177f7" exitCode=0 Jan 31 14:57:49 crc kubenswrapper[4751]: I0131 14:57:49.838446 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-sync-56bwv" event={"ID":"ff5e8bad-e481-445e-99e8-5a5487e908d8","Type":"ContainerDied","Data":"5d457b880e70ab7d7bdcd88eb562c916f03e6b62d577ebf9192cc4974cd177f7"} Jan 31 14:57:50 crc kubenswrapper[4751]: I0131 14:57:50.142172 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f" Jan 31 14:57:50 crc kubenswrapper[4751]: I0131 14:57:50.192318 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gs89j\" (UniqueName: \"kubernetes.io/projected/eec59a88-8f4d-4482-aa2a-11a508cc3a79-kube-api-access-gs89j\") pod \"eec59a88-8f4d-4482-aa2a-11a508cc3a79\" (UID: \"eec59a88-8f4d-4482-aa2a-11a508cc3a79\") " Jan 31 14:57:50 crc kubenswrapper[4751]: I0131 14:57:50.192471 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eec59a88-8f4d-4482-aa2a-11a508cc3a79-bundle\") pod \"eec59a88-8f4d-4482-aa2a-11a508cc3a79\" (UID: \"eec59a88-8f4d-4482-aa2a-11a508cc3a79\") " Jan 31 14:57:50 crc kubenswrapper[4751]: I0131 14:57:50.193185 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eec59a88-8f4d-4482-aa2a-11a508cc3a79-bundle" (OuterVolumeSpecName: "bundle") pod "eec59a88-8f4d-4482-aa2a-11a508cc3a79" (UID: "eec59a88-8f4d-4482-aa2a-11a508cc3a79"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:57:50 crc kubenswrapper[4751]: I0131 14:57:50.193814 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eec59a88-8f4d-4482-aa2a-11a508cc3a79-util\") pod \"eec59a88-8f4d-4482-aa2a-11a508cc3a79\" (UID: \"eec59a88-8f4d-4482-aa2a-11a508cc3a79\") " Jan 31 14:57:50 crc kubenswrapper[4751]: I0131 14:57:50.194438 4751 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/eec59a88-8f4d-4482-aa2a-11a508cc3a79-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:50 crc kubenswrapper[4751]: I0131 14:57:50.214896 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eec59a88-8f4d-4482-aa2a-11a508cc3a79-kube-api-access-gs89j" (OuterVolumeSpecName: "kube-api-access-gs89j") pod "eec59a88-8f4d-4482-aa2a-11a508cc3a79" (UID: "eec59a88-8f4d-4482-aa2a-11a508cc3a79"). InnerVolumeSpecName "kube-api-access-gs89j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:57:50 crc kubenswrapper[4751]: I0131 14:57:50.220549 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/eec59a88-8f4d-4482-aa2a-11a508cc3a79-util" (OuterVolumeSpecName: "util") pod "eec59a88-8f4d-4482-aa2a-11a508cc3a79" (UID: "eec59a88-8f4d-4482-aa2a-11a508cc3a79"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:57:50 crc kubenswrapper[4751]: I0131 14:57:50.295211 4751 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/eec59a88-8f4d-4482-aa2a-11a508cc3a79-util\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:50 crc kubenswrapper[4751]: I0131 14:57:50.295239 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gs89j\" (UniqueName: \"kubernetes.io/projected/eec59a88-8f4d-4482-aa2a-11a508cc3a79-kube-api-access-gs89j\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:50 crc kubenswrapper[4751]: I0131 14:57:50.847773 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f" event={"ID":"eec59a88-8f4d-4482-aa2a-11a508cc3a79","Type":"ContainerDied","Data":"a5c216ff73ca50f7f26c6f01ee85cb8e63a571945ededa582f156f42cc594b11"} Jan 31 14:57:50 crc kubenswrapper[4751]: I0131 14:57:50.848182 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a5c216ff73ca50f7f26c6f01ee85cb8e63a571945ededa582f156f42cc594b11" Jan 31 14:57:50 crc kubenswrapper[4751]: I0131 14:57:50.847790 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f" Jan 31 14:57:51 crc kubenswrapper[4751]: I0131 14:57:51.169889 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-db-sync-56bwv" Jan 31 14:57:51 crc kubenswrapper[4751]: I0131 14:57:51.205932 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff5e8bad-e481-445e-99e8-5a5487e908d8-config-data\") pod \"ff5e8bad-e481-445e-99e8-5a5487e908d8\" (UID: \"ff5e8bad-e481-445e-99e8-5a5487e908d8\") " Jan 31 14:57:51 crc kubenswrapper[4751]: I0131 14:57:51.206108 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hq52\" (UniqueName: \"kubernetes.io/projected/ff5e8bad-e481-445e-99e8-5a5487e908d8-kube-api-access-7hq52\") pod \"ff5e8bad-e481-445e-99e8-5a5487e908d8\" (UID: \"ff5e8bad-e481-445e-99e8-5a5487e908d8\") " Jan 31 14:57:51 crc kubenswrapper[4751]: I0131 14:57:51.211259 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff5e8bad-e481-445e-99e8-5a5487e908d8-kube-api-access-7hq52" (OuterVolumeSpecName: "kube-api-access-7hq52") pod "ff5e8bad-e481-445e-99e8-5a5487e908d8" (UID: "ff5e8bad-e481-445e-99e8-5a5487e908d8"). InnerVolumeSpecName "kube-api-access-7hq52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:57:51 crc kubenswrapper[4751]: I0131 14:57:51.234489 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ff5e8bad-e481-445e-99e8-5a5487e908d8-config-data" (OuterVolumeSpecName: "config-data") pod "ff5e8bad-e481-445e-99e8-5a5487e908d8" (UID: "ff5e8bad-e481-445e-99e8-5a5487e908d8"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:57:51 crc kubenswrapper[4751]: I0131 14:57:51.307956 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ff5e8bad-e481-445e-99e8-5a5487e908d8-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:51 crc kubenswrapper[4751]: I0131 14:57:51.307991 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hq52\" (UniqueName: \"kubernetes.io/projected/ff5e8bad-e481-445e-99e8-5a5487e908d8-kube-api-access-7hq52\") on node \"crc\" DevicePath \"\"" Jan 31 14:57:51 crc kubenswrapper[4751]: I0131 14:57:51.855397 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-db-sync-56bwv" event={"ID":"ff5e8bad-e481-445e-99e8-5a5487e908d8","Type":"ContainerDied","Data":"ecc8a96f4fea25c974208298436bf510666bd64b0748730b17c3c0f483b01a86"} Jan 31 14:57:51 crc kubenswrapper[4751]: I0131 14:57:51.855451 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ecc8a96f4fea25c974208298436bf510666bd64b0748730b17c3c0f483b01a86" Jan 31 14:57:51 crc kubenswrapper[4751]: I0131 14:57:51.855507 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-db-sync-56bwv" Jan 31 14:57:52 crc kubenswrapper[4751]: I0131 14:57:52.068839 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/keystone-bootstrap-hnxnd"] Jan 31 14:57:52 crc kubenswrapper[4751]: E0131 14:57:52.069159 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eec59a88-8f4d-4482-aa2a-11a508cc3a79" containerName="util" Jan 31 14:57:52 crc kubenswrapper[4751]: I0131 14:57:52.069176 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="eec59a88-8f4d-4482-aa2a-11a508cc3a79" containerName="util" Jan 31 14:57:52 crc kubenswrapper[4751]: E0131 14:57:52.069193 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="886303a3-d05b-4551-bd03-ebc2e2aef77c" containerName="pull" Jan 31 14:57:52 crc kubenswrapper[4751]: I0131 14:57:52.069202 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="886303a3-d05b-4551-bd03-ebc2e2aef77c" containerName="pull" Jan 31 14:57:52 crc kubenswrapper[4751]: E0131 14:57:52.069217 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="886303a3-d05b-4551-bd03-ebc2e2aef77c" containerName="extract" Jan 31 14:57:52 crc kubenswrapper[4751]: I0131 14:57:52.069224 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="886303a3-d05b-4551-bd03-ebc2e2aef77c" containerName="extract" Jan 31 14:57:52 crc kubenswrapper[4751]: E0131 14:57:52.069237 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ff5e8bad-e481-445e-99e8-5a5487e908d8" containerName="keystone-db-sync" Jan 31 14:57:52 crc kubenswrapper[4751]: I0131 14:57:52.069245 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff5e8bad-e481-445e-99e8-5a5487e908d8" containerName="keystone-db-sync" Jan 31 14:57:52 crc kubenswrapper[4751]: E0131 14:57:52.069257 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="886303a3-d05b-4551-bd03-ebc2e2aef77c" containerName="util" Jan 31 14:57:52 crc 
kubenswrapper[4751]: I0131 14:57:52.069265 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="886303a3-d05b-4551-bd03-ebc2e2aef77c" containerName="util" Jan 31 14:57:52 crc kubenswrapper[4751]: E0131 14:57:52.069278 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eec59a88-8f4d-4482-aa2a-11a508cc3a79" containerName="pull" Jan 31 14:57:52 crc kubenswrapper[4751]: I0131 14:57:52.069286 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="eec59a88-8f4d-4482-aa2a-11a508cc3a79" containerName="pull" Jan 31 14:57:52 crc kubenswrapper[4751]: E0131 14:57:52.069296 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eec59a88-8f4d-4482-aa2a-11a508cc3a79" containerName="extract" Jan 31 14:57:52 crc kubenswrapper[4751]: I0131 14:57:52.069304 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="eec59a88-8f4d-4482-aa2a-11a508cc3a79" containerName="extract" Jan 31 14:57:52 crc kubenswrapper[4751]: I0131 14:57:52.069431 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff5e8bad-e481-445e-99e8-5a5487e908d8" containerName="keystone-db-sync" Jan 31 14:57:52 crc kubenswrapper[4751]: I0131 14:57:52.069449 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="886303a3-d05b-4551-bd03-ebc2e2aef77c" containerName="extract" Jan 31 14:57:52 crc kubenswrapper[4751]: I0131 14:57:52.069465 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="eec59a88-8f4d-4482-aa2a-11a508cc3a79" containerName="extract" Jan 31 14:57:52 crc kubenswrapper[4751]: I0131 14:57:52.069985 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-bootstrap-hnxnd" Jan 31 14:57:52 crc kubenswrapper[4751]: I0131 14:57:52.072257 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"osp-secret" Jan 31 14:57:52 crc kubenswrapper[4751]: I0131 14:57:52.072455 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-config-data" Jan 31 14:57:52 crc kubenswrapper[4751]: I0131 14:57:52.072551 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone" Jan 31 14:57:52 crc kubenswrapper[4751]: I0131 14:57:52.072666 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-scripts" Jan 31 14:57:52 crc kubenswrapper[4751]: I0131 14:57:52.073063 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-keystone-dockercfg-szsvc" Jan 31 14:57:52 crc kubenswrapper[4751]: I0131 14:57:52.090483 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-bootstrap-hnxnd"] Jan 31 14:57:52 crc kubenswrapper[4751]: I0131 14:57:52.127849 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/041ede36-25a1-4d6d-9de2-d16218c5fc67-credential-keys\") pod \"keystone-bootstrap-hnxnd\" (UID: \"041ede36-25a1-4d6d-9de2-d16218c5fc67\") " pod="glance-kuttl-tests/keystone-bootstrap-hnxnd" Jan 31 14:57:52 crc kubenswrapper[4751]: I0131 14:57:52.127913 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/041ede36-25a1-4d6d-9de2-d16218c5fc67-fernet-keys\") pod \"keystone-bootstrap-hnxnd\" (UID: \"041ede36-25a1-4d6d-9de2-d16218c5fc67\") " pod="glance-kuttl-tests/keystone-bootstrap-hnxnd" Jan 31 14:57:52 crc kubenswrapper[4751]: I0131 14:57:52.127934 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lxls\" (UniqueName: \"kubernetes.io/projected/041ede36-25a1-4d6d-9de2-d16218c5fc67-kube-api-access-9lxls\") pod \"keystone-bootstrap-hnxnd\" (UID: \"041ede36-25a1-4d6d-9de2-d16218c5fc67\") " pod="glance-kuttl-tests/keystone-bootstrap-hnxnd" Jan 31 14:57:52 crc kubenswrapper[4751]: I0131 14:57:52.128250 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/041ede36-25a1-4d6d-9de2-d16218c5fc67-config-data\") pod \"keystone-bootstrap-hnxnd\" (UID: \"041ede36-25a1-4d6d-9de2-d16218c5fc67\") " pod="glance-kuttl-tests/keystone-bootstrap-hnxnd" Jan 31 14:57:52 crc kubenswrapper[4751]: I0131 14:57:52.128352 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/041ede36-25a1-4d6d-9de2-d16218c5fc67-scripts\") pod \"keystone-bootstrap-hnxnd\" (UID: \"041ede36-25a1-4d6d-9de2-d16218c5fc67\") " pod="glance-kuttl-tests/keystone-bootstrap-hnxnd" Jan 31 14:57:52 crc kubenswrapper[4751]: I0131 14:57:52.229899 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/041ede36-25a1-4d6d-9de2-d16218c5fc67-scripts\") pod \"keystone-bootstrap-hnxnd\" (UID: \"041ede36-25a1-4d6d-9de2-d16218c5fc67\") " pod="glance-kuttl-tests/keystone-bootstrap-hnxnd" Jan 31 14:57:52 crc kubenswrapper[4751]: I0131 14:57:52.230007 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/041ede36-25a1-4d6d-9de2-d16218c5fc67-credential-keys\") pod \"keystone-bootstrap-hnxnd\" (UID: \"041ede36-25a1-4d6d-9de2-d16218c5fc67\") " pod="glance-kuttl-tests/keystone-bootstrap-hnxnd" Jan 31 14:57:52 crc kubenswrapper[4751]: I0131 14:57:52.230051 4751 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/041ede36-25a1-4d6d-9de2-d16218c5fc67-fernet-keys\") pod \"keystone-bootstrap-hnxnd\" (UID: \"041ede36-25a1-4d6d-9de2-d16218c5fc67\") " pod="glance-kuttl-tests/keystone-bootstrap-hnxnd" Jan 31 14:57:52 crc kubenswrapper[4751]: I0131 14:57:52.230091 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lxls\" (UniqueName: \"kubernetes.io/projected/041ede36-25a1-4d6d-9de2-d16218c5fc67-kube-api-access-9lxls\") pod \"keystone-bootstrap-hnxnd\" (UID: \"041ede36-25a1-4d6d-9de2-d16218c5fc67\") " pod="glance-kuttl-tests/keystone-bootstrap-hnxnd" Jan 31 14:57:52 crc kubenswrapper[4751]: I0131 14:57:52.230141 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/041ede36-25a1-4d6d-9de2-d16218c5fc67-config-data\") pod \"keystone-bootstrap-hnxnd\" (UID: \"041ede36-25a1-4d6d-9de2-d16218c5fc67\") " pod="glance-kuttl-tests/keystone-bootstrap-hnxnd" Jan 31 14:57:52 crc kubenswrapper[4751]: I0131 14:57:52.248175 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/041ede36-25a1-4d6d-9de2-d16218c5fc67-fernet-keys\") pod \"keystone-bootstrap-hnxnd\" (UID: \"041ede36-25a1-4d6d-9de2-d16218c5fc67\") " pod="glance-kuttl-tests/keystone-bootstrap-hnxnd" Jan 31 14:57:52 crc kubenswrapper[4751]: I0131 14:57:52.248644 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/041ede36-25a1-4d6d-9de2-d16218c5fc67-scripts\") pod \"keystone-bootstrap-hnxnd\" (UID: \"041ede36-25a1-4d6d-9de2-d16218c5fc67\") " pod="glance-kuttl-tests/keystone-bootstrap-hnxnd" Jan 31 14:57:52 crc kubenswrapper[4751]: I0131 14:57:52.248901 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" 
(UniqueName: \"kubernetes.io/secret/041ede36-25a1-4d6d-9de2-d16218c5fc67-config-data\") pod \"keystone-bootstrap-hnxnd\" (UID: \"041ede36-25a1-4d6d-9de2-d16218c5fc67\") " pod="glance-kuttl-tests/keystone-bootstrap-hnxnd" Jan 31 14:57:52 crc kubenswrapper[4751]: I0131 14:57:52.251542 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/041ede36-25a1-4d6d-9de2-d16218c5fc67-credential-keys\") pod \"keystone-bootstrap-hnxnd\" (UID: \"041ede36-25a1-4d6d-9de2-d16218c5fc67\") " pod="glance-kuttl-tests/keystone-bootstrap-hnxnd" Jan 31 14:57:52 crc kubenswrapper[4751]: I0131 14:57:52.254710 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lxls\" (UniqueName: \"kubernetes.io/projected/041ede36-25a1-4d6d-9de2-d16218c5fc67-kube-api-access-9lxls\") pod \"keystone-bootstrap-hnxnd\" (UID: \"041ede36-25a1-4d6d-9de2-d16218c5fc67\") " pod="glance-kuttl-tests/keystone-bootstrap-hnxnd" Jan 31 14:57:52 crc kubenswrapper[4751]: I0131 14:57:52.397868 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-bootstrap-hnxnd" Jan 31 14:57:52 crc kubenswrapper[4751]: I0131 14:57:52.819480 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-bootstrap-hnxnd"] Jan 31 14:57:52 crc kubenswrapper[4751]: W0131 14:57:52.829399 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod041ede36_25a1_4d6d_9de2_d16218c5fc67.slice/crio-abf3888aa3724ab2f3cae0622ef9b13caeec62afa882cdfaa1640df9c86f1488 WatchSource:0}: Error finding container abf3888aa3724ab2f3cae0622ef9b13caeec62afa882cdfaa1640df9c86f1488: Status 404 returned error can't find the container with id abf3888aa3724ab2f3cae0622ef9b13caeec62afa882cdfaa1640df9c86f1488 Jan 31 14:57:52 crc kubenswrapper[4751]: I0131 14:57:52.863362 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-bootstrap-hnxnd" event={"ID":"041ede36-25a1-4d6d-9de2-d16218c5fc67","Type":"ContainerStarted","Data":"abf3888aa3724ab2f3cae0622ef9b13caeec62afa882cdfaa1640df9c86f1488"} Jan 31 14:57:55 crc kubenswrapper[4751]: I0131 14:57:55.888933 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-bootstrap-hnxnd" event={"ID":"041ede36-25a1-4d6d-9de2-d16218c5fc67","Type":"ContainerStarted","Data":"be0ffdbf0de55d407a928a375e5355c5f5a9cda93c0fc7ee45e3254cddeefdc8"} Jan 31 14:57:55 crc kubenswrapper[4751]: I0131 14:57:55.910896 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/keystone-bootstrap-hnxnd" podStartSLOduration=3.910875851 podStartE2EDuration="3.910875851s" podCreationTimestamp="2026-01-31 14:57:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:57:55.905983092 +0000 UTC m=+980.280695977" watchObservedRunningTime="2026-01-31 14:57:55.910875851 +0000 UTC m=+980.285588736" 
Jan 31 14:57:58 crc kubenswrapper[4751]: I0131 14:57:58.909276 4751 generic.go:334] "Generic (PLEG): container finished" podID="041ede36-25a1-4d6d-9de2-d16218c5fc67" containerID="be0ffdbf0de55d407a928a375e5355c5f5a9cda93c0fc7ee45e3254cddeefdc8" exitCode=0 Jan 31 14:57:58 crc kubenswrapper[4751]: I0131 14:57:58.909389 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-bootstrap-hnxnd" event={"ID":"041ede36-25a1-4d6d-9de2-d16218c5fc67","Type":"ContainerDied","Data":"be0ffdbf0de55d407a928a375e5355c5f5a9cda93c0fc7ee45e3254cddeefdc8"} Jan 31 14:58:00 crc kubenswrapper[4751]: I0131 14:58:00.153332 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-bootstrap-hnxnd" Jan 31 14:58:00 crc kubenswrapper[4751]: I0131 14:58:00.335281 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9lxls\" (UniqueName: \"kubernetes.io/projected/041ede36-25a1-4d6d-9de2-d16218c5fc67-kube-api-access-9lxls\") pod \"041ede36-25a1-4d6d-9de2-d16218c5fc67\" (UID: \"041ede36-25a1-4d6d-9de2-d16218c5fc67\") " Jan 31 14:58:00 crc kubenswrapper[4751]: I0131 14:58:00.335361 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/041ede36-25a1-4d6d-9de2-d16218c5fc67-config-data\") pod \"041ede36-25a1-4d6d-9de2-d16218c5fc67\" (UID: \"041ede36-25a1-4d6d-9de2-d16218c5fc67\") " Jan 31 14:58:00 crc kubenswrapper[4751]: I0131 14:58:00.335409 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/041ede36-25a1-4d6d-9de2-d16218c5fc67-credential-keys\") pod \"041ede36-25a1-4d6d-9de2-d16218c5fc67\" (UID: \"041ede36-25a1-4d6d-9de2-d16218c5fc67\") " Jan 31 14:58:00 crc kubenswrapper[4751]: I0131 14:58:00.335473 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/041ede36-25a1-4d6d-9de2-d16218c5fc67-scripts\") pod \"041ede36-25a1-4d6d-9de2-d16218c5fc67\" (UID: \"041ede36-25a1-4d6d-9de2-d16218c5fc67\") " Jan 31 14:58:00 crc kubenswrapper[4751]: I0131 14:58:00.335526 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/041ede36-25a1-4d6d-9de2-d16218c5fc67-fernet-keys\") pod \"041ede36-25a1-4d6d-9de2-d16218c5fc67\" (UID: \"041ede36-25a1-4d6d-9de2-d16218c5fc67\") " Jan 31 14:58:00 crc kubenswrapper[4751]: I0131 14:58:00.340140 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/041ede36-25a1-4d6d-9de2-d16218c5fc67-scripts" (OuterVolumeSpecName: "scripts") pod "041ede36-25a1-4d6d-9de2-d16218c5fc67" (UID: "041ede36-25a1-4d6d-9de2-d16218c5fc67"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:00 crc kubenswrapper[4751]: I0131 14:58:00.340657 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/041ede36-25a1-4d6d-9de2-d16218c5fc67-kube-api-access-9lxls" (OuterVolumeSpecName: "kube-api-access-9lxls") pod "041ede36-25a1-4d6d-9de2-d16218c5fc67" (UID: "041ede36-25a1-4d6d-9de2-d16218c5fc67"). InnerVolumeSpecName "kube-api-access-9lxls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:00 crc kubenswrapper[4751]: I0131 14:58:00.341223 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/041ede36-25a1-4d6d-9de2-d16218c5fc67-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "041ede36-25a1-4d6d-9de2-d16218c5fc67" (UID: "041ede36-25a1-4d6d-9de2-d16218c5fc67"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:00 crc kubenswrapper[4751]: I0131 14:58:00.347298 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/041ede36-25a1-4d6d-9de2-d16218c5fc67-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "041ede36-25a1-4d6d-9de2-d16218c5fc67" (UID: "041ede36-25a1-4d6d-9de2-d16218c5fc67"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:00 crc kubenswrapper[4751]: I0131 14:58:00.360822 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/041ede36-25a1-4d6d-9de2-d16218c5fc67-config-data" (OuterVolumeSpecName: "config-data") pod "041ede36-25a1-4d6d-9de2-d16218c5fc67" (UID: "041ede36-25a1-4d6d-9de2-d16218c5fc67"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:00 crc kubenswrapper[4751]: I0131 14:58:00.437781 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9lxls\" (UniqueName: \"kubernetes.io/projected/041ede36-25a1-4d6d-9de2-d16218c5fc67-kube-api-access-9lxls\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:00 crc kubenswrapper[4751]: I0131 14:58:00.437814 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/041ede36-25a1-4d6d-9de2-d16218c5fc67-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:00 crc kubenswrapper[4751]: I0131 14:58:00.437824 4751 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/041ede36-25a1-4d6d-9de2-d16218c5fc67-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:00 crc kubenswrapper[4751]: I0131 14:58:00.437832 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/041ede36-25a1-4d6d-9de2-d16218c5fc67-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:00 crc 
kubenswrapper[4751]: I0131 14:58:00.437840 4751 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/041ede36-25a1-4d6d-9de2-d16218c5fc67-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:00 crc kubenswrapper[4751]: I0131 14:58:00.925225 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-bootstrap-hnxnd" event={"ID":"041ede36-25a1-4d6d-9de2-d16218c5fc67","Type":"ContainerDied","Data":"abf3888aa3724ab2f3cae0622ef9b13caeec62afa882cdfaa1640df9c86f1488"} Jan 31 14:58:00 crc kubenswrapper[4751]: I0131 14:58:00.925560 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="abf3888aa3724ab2f3cae0622ef9b13caeec62afa882cdfaa1640df9c86f1488" Jan 31 14:58:00 crc kubenswrapper[4751]: I0131 14:58:00.925280 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-bootstrap-hnxnd" Jan 31 14:58:01 crc kubenswrapper[4751]: I0131 14:58:01.016802 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/keystone-859d455469-zqqzw"] Jan 31 14:58:01 crc kubenswrapper[4751]: E0131 14:58:01.017234 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="041ede36-25a1-4d6d-9de2-d16218c5fc67" containerName="keystone-bootstrap" Jan 31 14:58:01 crc kubenswrapper[4751]: I0131 14:58:01.017297 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="041ede36-25a1-4d6d-9de2-d16218c5fc67" containerName="keystone-bootstrap" Jan 31 14:58:01 crc kubenswrapper[4751]: I0131 14:58:01.017495 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="041ede36-25a1-4d6d-9de2-d16218c5fc67" containerName="keystone-bootstrap" Jan 31 14:58:01 crc kubenswrapper[4751]: I0131 14:58:01.017949 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-859d455469-zqqzw" Jan 31 14:58:01 crc kubenswrapper[4751]: I0131 14:58:01.022466 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-config-data" Jan 31 14:58:01 crc kubenswrapper[4751]: I0131 14:58:01.023219 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-keystone-dockercfg-szsvc" Jan 31 14:58:01 crc kubenswrapper[4751]: I0131 14:58:01.023250 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone-scripts" Jan 31 14:58:01 crc kubenswrapper[4751]: I0131 14:58:01.023858 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"keystone" Jan 31 14:58:01 crc kubenswrapper[4751]: I0131 14:58:01.039324 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-859d455469-zqqzw"] Jan 31 14:58:01 crc kubenswrapper[4751]: I0131 14:58:01.045803 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dabb55da-08db-4d2a-8b2d-ac7b2b657053-scripts\") pod \"keystone-859d455469-zqqzw\" (UID: \"dabb55da-08db-4d2a-8b2d-ac7b2b657053\") " pod="glance-kuttl-tests/keystone-859d455469-zqqzw" Jan 31 14:58:01 crc kubenswrapper[4751]: I0131 14:58:01.046024 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dabb55da-08db-4d2a-8b2d-ac7b2b657053-credential-keys\") pod \"keystone-859d455469-zqqzw\" (UID: \"dabb55da-08db-4d2a-8b2d-ac7b2b657053\") " pod="glance-kuttl-tests/keystone-859d455469-zqqzw" Jan 31 14:58:01 crc kubenswrapper[4751]: I0131 14:58:01.046146 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/dabb55da-08db-4d2a-8b2d-ac7b2b657053-fernet-keys\") pod \"keystone-859d455469-zqqzw\" (UID: \"dabb55da-08db-4d2a-8b2d-ac7b2b657053\") " pod="glance-kuttl-tests/keystone-859d455469-zqqzw" Jan 31 14:58:01 crc kubenswrapper[4751]: I0131 14:58:01.046225 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dabb55da-08db-4d2a-8b2d-ac7b2b657053-config-data\") pod \"keystone-859d455469-zqqzw\" (UID: \"dabb55da-08db-4d2a-8b2d-ac7b2b657053\") " pod="glance-kuttl-tests/keystone-859d455469-zqqzw" Jan 31 14:58:01 crc kubenswrapper[4751]: I0131 14:58:01.046304 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46qfq\" (UniqueName: \"kubernetes.io/projected/dabb55da-08db-4d2a-8b2d-ac7b2b657053-kube-api-access-46qfq\") pod \"keystone-859d455469-zqqzw\" (UID: \"dabb55da-08db-4d2a-8b2d-ac7b2b657053\") " pod="glance-kuttl-tests/keystone-859d455469-zqqzw" Jan 31 14:58:01 crc kubenswrapper[4751]: I0131 14:58:01.146821 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dabb55da-08db-4d2a-8b2d-ac7b2b657053-scripts\") pod \"keystone-859d455469-zqqzw\" (UID: \"dabb55da-08db-4d2a-8b2d-ac7b2b657053\") " pod="glance-kuttl-tests/keystone-859d455469-zqqzw" Jan 31 14:58:01 crc kubenswrapper[4751]: I0131 14:58:01.146873 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dabb55da-08db-4d2a-8b2d-ac7b2b657053-credential-keys\") pod \"keystone-859d455469-zqqzw\" (UID: \"dabb55da-08db-4d2a-8b2d-ac7b2b657053\") " pod="glance-kuttl-tests/keystone-859d455469-zqqzw" Jan 31 14:58:01 crc kubenswrapper[4751]: I0131 14:58:01.146901 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: 
\"kubernetes.io/secret/dabb55da-08db-4d2a-8b2d-ac7b2b657053-fernet-keys\") pod \"keystone-859d455469-zqqzw\" (UID: \"dabb55da-08db-4d2a-8b2d-ac7b2b657053\") " pod="glance-kuttl-tests/keystone-859d455469-zqqzw" Jan 31 14:58:01 crc kubenswrapper[4751]: I0131 14:58:01.146917 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dabb55da-08db-4d2a-8b2d-ac7b2b657053-config-data\") pod \"keystone-859d455469-zqqzw\" (UID: \"dabb55da-08db-4d2a-8b2d-ac7b2b657053\") " pod="glance-kuttl-tests/keystone-859d455469-zqqzw" Jan 31 14:58:01 crc kubenswrapper[4751]: I0131 14:58:01.146951 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-46qfq\" (UniqueName: \"kubernetes.io/projected/dabb55da-08db-4d2a-8b2d-ac7b2b657053-kube-api-access-46qfq\") pod \"keystone-859d455469-zqqzw\" (UID: \"dabb55da-08db-4d2a-8b2d-ac7b2b657053\") " pod="glance-kuttl-tests/keystone-859d455469-zqqzw" Jan 31 14:58:01 crc kubenswrapper[4751]: I0131 14:58:01.150670 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dabb55da-08db-4d2a-8b2d-ac7b2b657053-credential-keys\") pod \"keystone-859d455469-zqqzw\" (UID: \"dabb55da-08db-4d2a-8b2d-ac7b2b657053\") " pod="glance-kuttl-tests/keystone-859d455469-zqqzw" Jan 31 14:58:01 crc kubenswrapper[4751]: I0131 14:58:01.150789 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dabb55da-08db-4d2a-8b2d-ac7b2b657053-config-data\") pod \"keystone-859d455469-zqqzw\" (UID: \"dabb55da-08db-4d2a-8b2d-ac7b2b657053\") " pod="glance-kuttl-tests/keystone-859d455469-zqqzw" Jan 31 14:58:01 crc kubenswrapper[4751]: I0131 14:58:01.151352 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dabb55da-08db-4d2a-8b2d-ac7b2b657053-scripts\") pod 
\"keystone-859d455469-zqqzw\" (UID: \"dabb55da-08db-4d2a-8b2d-ac7b2b657053\") " pod="glance-kuttl-tests/keystone-859d455469-zqqzw" Jan 31 14:58:01 crc kubenswrapper[4751]: I0131 14:58:01.151672 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dabb55da-08db-4d2a-8b2d-ac7b2b657053-fernet-keys\") pod \"keystone-859d455469-zqqzw\" (UID: \"dabb55da-08db-4d2a-8b2d-ac7b2b657053\") " pod="glance-kuttl-tests/keystone-859d455469-zqqzw" Jan 31 14:58:01 crc kubenswrapper[4751]: I0131 14:58:01.164706 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-46qfq\" (UniqueName: \"kubernetes.io/projected/dabb55da-08db-4d2a-8b2d-ac7b2b657053-kube-api-access-46qfq\") pod \"keystone-859d455469-zqqzw\" (UID: \"dabb55da-08db-4d2a-8b2d-ac7b2b657053\") " pod="glance-kuttl-tests/keystone-859d455469-zqqzw" Jan 31 14:58:01 crc kubenswrapper[4751]: I0131 14:58:01.333206 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-859d455469-zqqzw" Jan 31 14:58:01 crc kubenswrapper[4751]: I0131 14:58:01.779353 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-859d455469-zqqzw"] Jan 31 14:58:01 crc kubenswrapper[4751]: I0131 14:58:01.932437 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-859d455469-zqqzw" event={"ID":"dabb55da-08db-4d2a-8b2d-ac7b2b657053","Type":"ContainerStarted","Data":"6d283eaf7e9a4eadb7f123ebbd0723c09363494de09f8fb76c6271216f1a8ecb"} Jan 31 14:58:01 crc kubenswrapper[4751]: I0131 14:58:01.932482 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-859d455469-zqqzw" event={"ID":"dabb55da-08db-4d2a-8b2d-ac7b2b657053","Type":"ContainerStarted","Data":"02e1eb0fcf9c093b28dd6fc9f0fb02613d1865a02336d6e8e82c2fa50f8597a7"} Jan 31 14:58:01 crc kubenswrapper[4751]: I0131 14:58:01.932588 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/keystone-859d455469-zqqzw" Jan 31 14:58:01 crc kubenswrapper[4751]: I0131 14:58:01.947938 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/keystone-859d455469-zqqzw" podStartSLOduration=1.947917205 podStartE2EDuration="1.947917205s" podCreationTimestamp="2026-01-31 14:58:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:58:01.947306669 +0000 UTC m=+986.322019564" watchObservedRunningTime="2026-01-31 14:58:01.947917205 +0000 UTC m=+986.322630090" Jan 31 14:58:04 crc kubenswrapper[4751]: I0131 14:58:04.304904 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/swift-operator-controller-manager-59595cd-9djr5"] Jan 31 14:58:04 crc kubenswrapper[4751]: I0131 14:58:04.306030 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-59595cd-9djr5" Jan 31 14:58:04 crc kubenswrapper[4751]: I0131 14:58:04.307863 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-dockercfg-zkbxh" Jan 31 14:58:04 crc kubenswrapper[4751]: I0131 14:58:04.308173 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"swift-operator-controller-manager-service-cert" Jan 31 14:58:04 crc kubenswrapper[4751]: I0131 14:58:04.322943 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-59595cd-9djr5"] Jan 31 14:58:04 crc kubenswrapper[4751]: I0131 14:58:04.496539 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9f3dfaad-d451-448b-a447-47fc7bbff0e5-apiservice-cert\") pod \"swift-operator-controller-manager-59595cd-9djr5\" (UID: \"9f3dfaad-d451-448b-a447-47fc7bbff0e5\") " pod="openstack-operators/swift-operator-controller-manager-59595cd-9djr5" Jan 31 14:58:04 crc kubenswrapper[4751]: I0131 14:58:04.496612 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9f3dfaad-d451-448b-a447-47fc7bbff0e5-webhook-cert\") pod \"swift-operator-controller-manager-59595cd-9djr5\" (UID: \"9f3dfaad-d451-448b-a447-47fc7bbff0e5\") " pod="openstack-operators/swift-operator-controller-manager-59595cd-9djr5" Jan 31 14:58:04 crc kubenswrapper[4751]: I0131 14:58:04.496672 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-67l2j\" (UniqueName: \"kubernetes.io/projected/9f3dfaad-d451-448b-a447-47fc7bbff0e5-kube-api-access-67l2j\") pod \"swift-operator-controller-manager-59595cd-9djr5\" (UID: \"9f3dfaad-d451-448b-a447-47fc7bbff0e5\") " 
pod="openstack-operators/swift-operator-controller-manager-59595cd-9djr5" Jan 31 14:58:04 crc kubenswrapper[4751]: I0131 14:58:04.597502 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9f3dfaad-d451-448b-a447-47fc7bbff0e5-apiservice-cert\") pod \"swift-operator-controller-manager-59595cd-9djr5\" (UID: \"9f3dfaad-d451-448b-a447-47fc7bbff0e5\") " pod="openstack-operators/swift-operator-controller-manager-59595cd-9djr5" Jan 31 14:58:04 crc kubenswrapper[4751]: I0131 14:58:04.597558 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9f3dfaad-d451-448b-a447-47fc7bbff0e5-webhook-cert\") pod \"swift-operator-controller-manager-59595cd-9djr5\" (UID: \"9f3dfaad-d451-448b-a447-47fc7bbff0e5\") " pod="openstack-operators/swift-operator-controller-manager-59595cd-9djr5" Jan 31 14:58:04 crc kubenswrapper[4751]: I0131 14:58:04.597588 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-67l2j\" (UniqueName: \"kubernetes.io/projected/9f3dfaad-d451-448b-a447-47fc7bbff0e5-kube-api-access-67l2j\") pod \"swift-operator-controller-manager-59595cd-9djr5\" (UID: \"9f3dfaad-d451-448b-a447-47fc7bbff0e5\") " pod="openstack-operators/swift-operator-controller-manager-59595cd-9djr5" Jan 31 14:58:04 crc kubenswrapper[4751]: I0131 14:58:04.603537 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9f3dfaad-d451-448b-a447-47fc7bbff0e5-apiservice-cert\") pod \"swift-operator-controller-manager-59595cd-9djr5\" (UID: \"9f3dfaad-d451-448b-a447-47fc7bbff0e5\") " pod="openstack-operators/swift-operator-controller-manager-59595cd-9djr5" Jan 31 14:58:04 crc kubenswrapper[4751]: I0131 14:58:04.606602 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/9f3dfaad-d451-448b-a447-47fc7bbff0e5-webhook-cert\") pod \"swift-operator-controller-manager-59595cd-9djr5\" (UID: \"9f3dfaad-d451-448b-a447-47fc7bbff0e5\") " pod="openstack-operators/swift-operator-controller-manager-59595cd-9djr5" Jan 31 14:58:04 crc kubenswrapper[4751]: I0131 14:58:04.613644 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-67l2j\" (UniqueName: \"kubernetes.io/projected/9f3dfaad-d451-448b-a447-47fc7bbff0e5-kube-api-access-67l2j\") pod \"swift-operator-controller-manager-59595cd-9djr5\" (UID: \"9f3dfaad-d451-448b-a447-47fc7bbff0e5\") " pod="openstack-operators/swift-operator-controller-manager-59595cd-9djr5" Jan 31 14:58:04 crc kubenswrapper[4751]: I0131 14:58:04.621561 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-59595cd-9djr5" Jan 31 14:58:05 crc kubenswrapper[4751]: I0131 14:58:05.097928 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/swift-operator-controller-manager-59595cd-9djr5"] Jan 31 14:58:05 crc kubenswrapper[4751]: I0131 14:58:05.107014 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 31 14:58:05 crc kubenswrapper[4751]: I0131 14:58:05.967728 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-59595cd-9djr5" event={"ID":"9f3dfaad-d451-448b-a447-47fc7bbff0e5","Type":"ContainerStarted","Data":"da3b689c07e135768fb2bc22c72ffa9872cf722e04a986707e86515f65114b9c"} Jan 31 14:58:07 crc kubenswrapper[4751]: I0131 14:58:07.990161 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-59595cd-9djr5" event={"ID":"9f3dfaad-d451-448b-a447-47fc7bbff0e5","Type":"ContainerStarted","Data":"668d137892e68f6f4b2298a804a817a3b16a09cc4c85201f8a03fca82e38e755"} Jan 31 14:58:07 crc kubenswrapper[4751]: 
I0131 14:58:07.990533 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/swift-operator-controller-manager-59595cd-9djr5" Jan 31 14:58:08 crc kubenswrapper[4751]: I0131 14:58:08.024997 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/swift-operator-controller-manager-59595cd-9djr5" podStartSLOduration=2.059231994 podStartE2EDuration="4.024959423s" podCreationTimestamp="2026-01-31 14:58:04 +0000 UTC" firstStartedPulling="2026-01-31 14:58:05.106816771 +0000 UTC m=+989.481529656" lastFinishedPulling="2026-01-31 14:58:07.0725442 +0000 UTC m=+991.447257085" observedRunningTime="2026-01-31 14:58:08.01572853 +0000 UTC m=+992.390441485" watchObservedRunningTime="2026-01-31 14:58:08.024959423 +0000 UTC m=+992.399672388" Jan 31 14:58:08 crc kubenswrapper[4751]: I0131 14:58:08.897044 4751 patch_prober.go:28] interesting pod/machine-config-daemon-2wpj7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 14:58:08 crc kubenswrapper[4751]: I0131 14:58:08.897165 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 14:58:13 crc kubenswrapper[4751]: I0131 14:58:13.594594 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d4cb5b58-r8xn7"] Jan 31 14:58:13 crc kubenswrapper[4751]: I0131 14:58:13.595556 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d4cb5b58-r8xn7" Jan 31 14:58:13 crc kubenswrapper[4751]: I0131 14:58:13.597254 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-service-cert" Jan 31 14:58:13 crc kubenswrapper[4751]: I0131 14:58:13.597374 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"horizon-operator-controller-manager-dockercfg-fdxgb" Jan 31 14:58:13 crc kubenswrapper[4751]: I0131 14:58:13.611229 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d4cb5b58-r8xn7"] Jan 31 14:58:13 crc kubenswrapper[4751]: I0131 14:58:13.689918 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-f8fsr\" (UniqueName: \"kubernetes.io/projected/91cc4333-403a-4ce4-a347-8b475ad0169a-kube-api-access-f8fsr\") pod \"horizon-operator-controller-manager-6d4cb5b58-r8xn7\" (UID: \"91cc4333-403a-4ce4-a347-8b475ad0169a\") " pod="openstack-operators/horizon-operator-controller-manager-6d4cb5b58-r8xn7" Jan 31 14:58:13 crc kubenswrapper[4751]: I0131 14:58:13.689993 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/91cc4333-403a-4ce4-a347-8b475ad0169a-apiservice-cert\") pod \"horizon-operator-controller-manager-6d4cb5b58-r8xn7\" (UID: \"91cc4333-403a-4ce4-a347-8b475ad0169a\") " pod="openstack-operators/horizon-operator-controller-manager-6d4cb5b58-r8xn7" Jan 31 14:58:13 crc kubenswrapper[4751]: I0131 14:58:13.690215 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/91cc4333-403a-4ce4-a347-8b475ad0169a-webhook-cert\") pod \"horizon-operator-controller-manager-6d4cb5b58-r8xn7\" (UID: 
\"91cc4333-403a-4ce4-a347-8b475ad0169a\") " pod="openstack-operators/horizon-operator-controller-manager-6d4cb5b58-r8xn7" Jan 31 14:58:13 crc kubenswrapper[4751]: I0131 14:58:13.791278 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/91cc4333-403a-4ce4-a347-8b475ad0169a-webhook-cert\") pod \"horizon-operator-controller-manager-6d4cb5b58-r8xn7\" (UID: \"91cc4333-403a-4ce4-a347-8b475ad0169a\") " pod="openstack-operators/horizon-operator-controller-manager-6d4cb5b58-r8xn7" Jan 31 14:58:13 crc kubenswrapper[4751]: I0131 14:58:13.791334 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-f8fsr\" (UniqueName: \"kubernetes.io/projected/91cc4333-403a-4ce4-a347-8b475ad0169a-kube-api-access-f8fsr\") pod \"horizon-operator-controller-manager-6d4cb5b58-r8xn7\" (UID: \"91cc4333-403a-4ce4-a347-8b475ad0169a\") " pod="openstack-operators/horizon-operator-controller-manager-6d4cb5b58-r8xn7" Jan 31 14:58:13 crc kubenswrapper[4751]: I0131 14:58:13.791384 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/91cc4333-403a-4ce4-a347-8b475ad0169a-apiservice-cert\") pod \"horizon-operator-controller-manager-6d4cb5b58-r8xn7\" (UID: \"91cc4333-403a-4ce4-a347-8b475ad0169a\") " pod="openstack-operators/horizon-operator-controller-manager-6d4cb5b58-r8xn7" Jan 31 14:58:13 crc kubenswrapper[4751]: I0131 14:58:13.796491 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/91cc4333-403a-4ce4-a347-8b475ad0169a-webhook-cert\") pod \"horizon-operator-controller-manager-6d4cb5b58-r8xn7\" (UID: \"91cc4333-403a-4ce4-a347-8b475ad0169a\") " pod="openstack-operators/horizon-operator-controller-manager-6d4cb5b58-r8xn7" Jan 31 14:58:13 crc kubenswrapper[4751]: I0131 14:58:13.802710 4751 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/91cc4333-403a-4ce4-a347-8b475ad0169a-apiservice-cert\") pod \"horizon-operator-controller-manager-6d4cb5b58-r8xn7\" (UID: \"91cc4333-403a-4ce4-a347-8b475ad0169a\") " pod="openstack-operators/horizon-operator-controller-manager-6d4cb5b58-r8xn7" Jan 31 14:58:13 crc kubenswrapper[4751]: I0131 14:58:13.807764 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-f8fsr\" (UniqueName: \"kubernetes.io/projected/91cc4333-403a-4ce4-a347-8b475ad0169a-kube-api-access-f8fsr\") pod \"horizon-operator-controller-manager-6d4cb5b58-r8xn7\" (UID: \"91cc4333-403a-4ce4-a347-8b475ad0169a\") " pod="openstack-operators/horizon-operator-controller-manager-6d4cb5b58-r8xn7" Jan 31 14:58:13 crc kubenswrapper[4751]: I0131 14:58:13.914487 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/horizon-operator-controller-manager-6d4cb5b58-r8xn7" Jan 31 14:58:14 crc kubenswrapper[4751]: I0131 14:58:14.318727 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/horizon-operator-controller-manager-6d4cb5b58-r8xn7"] Jan 31 14:58:14 crc kubenswrapper[4751]: I0131 14:58:14.627572 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/swift-operator-controller-manager-59595cd-9djr5" Jan 31 14:58:15 crc kubenswrapper[4751]: I0131 14:58:15.049674 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d4cb5b58-r8xn7" event={"ID":"91cc4333-403a-4ce4-a347-8b475ad0169a","Type":"ContainerStarted","Data":"db6a2307fd9e1cecbdc1efd47215efe13bdd3558905a52adfb7808e20c228b72"} Jan 31 14:58:17 crc kubenswrapper[4751]: I0131 14:58:17.065188 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/horizon-operator-controller-manager-6d4cb5b58-r8xn7" 
event={"ID":"91cc4333-403a-4ce4-a347-8b475ad0169a","Type":"ContainerStarted","Data":"54a3468fd01d3d2ceae1f10629ed129d49cf0c01e0869142914b381d771228f2"} Jan 31 14:58:17 crc kubenswrapper[4751]: I0131 14:58:17.065726 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/horizon-operator-controller-manager-6d4cb5b58-r8xn7" Jan 31 14:58:17 crc kubenswrapper[4751]: I0131 14:58:17.084907 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/horizon-operator-controller-manager-6d4cb5b58-r8xn7" podStartSLOduration=1.8851242080000001 podStartE2EDuration="4.084889204s" podCreationTimestamp="2026-01-31 14:58:13 +0000 UTC" firstStartedPulling="2026-01-31 14:58:14.321618049 +0000 UTC m=+998.696330934" lastFinishedPulling="2026-01-31 14:58:16.521383045 +0000 UTC m=+1000.896095930" observedRunningTime="2026-01-31 14:58:17.08437527 +0000 UTC m=+1001.459088155" watchObservedRunningTime="2026-01-31 14:58:17.084889204 +0000 UTC m=+1001.459602079" Jan 31 14:58:20 crc kubenswrapper[4751]: I0131 14:58:20.908622 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/swift-storage-0"] Jan 31 14:58:20 crc kubenswrapper[4751]: I0131 14:58:20.917744 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/swift-storage-0" Jan 31 14:58:20 crc kubenswrapper[4751]: I0131 14:58:20.922671 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"swift-ring-files" Jan 31 14:58:20 crc kubenswrapper[4751]: I0131 14:58:20.923907 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"swift-conf" Jan 31 14:58:20 crc kubenswrapper[4751]: I0131 14:58:20.924268 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"swift-storage-config-data" Jan 31 14:58:20 crc kubenswrapper[4751]: I0131 14:58:20.925629 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"swift-swift-dockercfg-nwfcb" Jan 31 14:58:20 crc kubenswrapper[4751]: I0131 14:58:20.947083 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-storage-0"] Jan 31 14:58:21 crc kubenswrapper[4751]: I0131 14:58:21.095590 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"swift-storage-0\" (UID: \"440e5809-7b49-4b21-99dd-668468c84017\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 14:58:21 crc kubenswrapper[4751]: I0131 14:58:21.095805 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kqvvn\" (UniqueName: \"kubernetes.io/projected/440e5809-7b49-4b21-99dd-668468c84017-kube-api-access-kqvvn\") pod \"swift-storage-0\" (UID: \"440e5809-7b49-4b21-99dd-668468c84017\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 14:58:21 crc kubenswrapper[4751]: I0131 14:58:21.095917 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/440e5809-7b49-4b21-99dd-668468c84017-etc-swift\") pod \"swift-storage-0\" (UID: 
\"440e5809-7b49-4b21-99dd-668468c84017\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 14:58:21 crc kubenswrapper[4751]: I0131 14:58:21.096045 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/440e5809-7b49-4b21-99dd-668468c84017-cache\") pod \"swift-storage-0\" (UID: \"440e5809-7b49-4b21-99dd-668468c84017\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 14:58:21 crc kubenswrapper[4751]: I0131 14:58:21.096192 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/440e5809-7b49-4b21-99dd-668468c84017-lock\") pod \"swift-storage-0\" (UID: \"440e5809-7b49-4b21-99dd-668468c84017\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 14:58:21 crc kubenswrapper[4751]: I0131 14:58:21.197813 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/440e5809-7b49-4b21-99dd-668468c84017-cache\") pod \"swift-storage-0\" (UID: \"440e5809-7b49-4b21-99dd-668468c84017\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 14:58:21 crc kubenswrapper[4751]: I0131 14:58:21.197893 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/440e5809-7b49-4b21-99dd-668468c84017-lock\") pod \"swift-storage-0\" (UID: \"440e5809-7b49-4b21-99dd-668468c84017\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 14:58:21 crc kubenswrapper[4751]: I0131 14:58:21.197931 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"swift-storage-0\" (UID: \"440e5809-7b49-4b21-99dd-668468c84017\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 14:58:21 crc kubenswrapper[4751]: I0131 14:58:21.197953 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-kqvvn\" (UniqueName: \"kubernetes.io/projected/440e5809-7b49-4b21-99dd-668468c84017-kube-api-access-kqvvn\") pod \"swift-storage-0\" (UID: \"440e5809-7b49-4b21-99dd-668468c84017\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 14:58:21 crc kubenswrapper[4751]: I0131 14:58:21.197969 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/440e5809-7b49-4b21-99dd-668468c84017-etc-swift\") pod \"swift-storage-0\" (UID: \"440e5809-7b49-4b21-99dd-668468c84017\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 14:58:21 crc kubenswrapper[4751]: E0131 14:58:21.198102 4751 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 14:58:21 crc kubenswrapper[4751]: E0131 14:58:21.198114 4751 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 14:58:21 crc kubenswrapper[4751]: E0131 14:58:21.198155 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/440e5809-7b49-4b21-99dd-668468c84017-etc-swift podName:440e5809-7b49-4b21-99dd-668468c84017 nodeName:}" failed. No retries permitted until 2026-01-31 14:58:21.698140724 +0000 UTC m=+1006.072853599 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/440e5809-7b49-4b21-99dd-668468c84017-etc-swift") pod "swift-storage-0" (UID: "440e5809-7b49-4b21-99dd-668468c84017") : configmap "swift-ring-files" not found Jan 31 14:58:21 crc kubenswrapper[4751]: I0131 14:58:21.198544 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/440e5809-7b49-4b21-99dd-668468c84017-cache\") pod \"swift-storage-0\" (UID: \"440e5809-7b49-4b21-99dd-668468c84017\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 14:58:21 crc kubenswrapper[4751]: I0131 14:58:21.198605 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"swift-storage-0\" (UID: \"440e5809-7b49-4b21-99dd-668468c84017\") device mount path \"/mnt/openstack/pv14\"" pod="glance-kuttl-tests/swift-storage-0" Jan 31 14:58:21 crc kubenswrapper[4751]: I0131 14:58:21.210572 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/440e5809-7b49-4b21-99dd-668468c84017-lock\") pod \"swift-storage-0\" (UID: \"440e5809-7b49-4b21-99dd-668468c84017\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 14:58:21 crc kubenswrapper[4751]: I0131 14:58:21.220723 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"swift-storage-0\" (UID: \"440e5809-7b49-4b21-99dd-668468c84017\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 14:58:21 crc kubenswrapper[4751]: I0131 14:58:21.245175 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kqvvn\" (UniqueName: \"kubernetes.io/projected/440e5809-7b49-4b21-99dd-668468c84017-kube-api-access-kqvvn\") pod \"swift-storage-0\" (UID: 
\"440e5809-7b49-4b21-99dd-668468c84017\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 14:58:21 crc kubenswrapper[4751]: I0131 14:58:21.704694 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/440e5809-7b49-4b21-99dd-668468c84017-etc-swift\") pod \"swift-storage-0\" (UID: \"440e5809-7b49-4b21-99dd-668468c84017\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 14:58:21 crc kubenswrapper[4751]: E0131 14:58:21.704895 4751 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 14:58:21 crc kubenswrapper[4751]: E0131 14:58:21.704924 4751 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 14:58:21 crc kubenswrapper[4751]: E0131 14:58:21.704988 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/440e5809-7b49-4b21-99dd-668468c84017-etc-swift podName:440e5809-7b49-4b21-99dd-668468c84017 nodeName:}" failed. No retries permitted until 2026-01-31 14:58:22.704970521 +0000 UTC m=+1007.079683416 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/440e5809-7b49-4b21-99dd-668468c84017-etc-swift") pod "swift-storage-0" (UID: "440e5809-7b49-4b21-99dd-668468c84017") : configmap "swift-ring-files" not found Jan 31 14:58:22 crc kubenswrapper[4751]: I0131 14:58:22.521295 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-index-bvvpv"] Jan 31 14:58:22 crc kubenswrapper[4751]: I0131 14:58:22.524209 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-index-bvvpv" Jan 31 14:58:22 crc kubenswrapper[4751]: I0131 14:58:22.527367 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-index-dockercfg-jtwrn" Jan 31 14:58:22 crc kubenswrapper[4751]: I0131 14:58:22.556642 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-index-bvvpv"] Jan 31 14:58:22 crc kubenswrapper[4751]: I0131 14:58:22.720350 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/440e5809-7b49-4b21-99dd-668468c84017-etc-swift\") pod \"swift-storage-0\" (UID: \"440e5809-7b49-4b21-99dd-668468c84017\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 14:58:22 crc kubenswrapper[4751]: I0131 14:58:22.720480 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtvgv\" (UniqueName: \"kubernetes.io/projected/eacc0c6c-95c4-487f-945e-4a1e3e17c508-kube-api-access-vtvgv\") pod \"glance-operator-index-bvvpv\" (UID: \"eacc0c6c-95c4-487f-945e-4a1e3e17c508\") " pod="openstack-operators/glance-operator-index-bvvpv" Jan 31 14:58:22 crc kubenswrapper[4751]: E0131 14:58:22.720640 4751 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 14:58:22 crc kubenswrapper[4751]: E0131 14:58:22.720668 4751 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 14:58:22 crc kubenswrapper[4751]: E0131 14:58:22.720743 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/440e5809-7b49-4b21-99dd-668468c84017-etc-swift podName:440e5809-7b49-4b21-99dd-668468c84017 nodeName:}" failed. 
No retries permitted until 2026-01-31 14:58:24.720717261 +0000 UTC m=+1009.095430156 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/440e5809-7b49-4b21-99dd-668468c84017-etc-swift") pod "swift-storage-0" (UID: "440e5809-7b49-4b21-99dd-668468c84017") : configmap "swift-ring-files" not found Jan 31 14:58:22 crc kubenswrapper[4751]: I0131 14:58:22.822120 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtvgv\" (UniqueName: \"kubernetes.io/projected/eacc0c6c-95c4-487f-945e-4a1e3e17c508-kube-api-access-vtvgv\") pod \"glance-operator-index-bvvpv\" (UID: \"eacc0c6c-95c4-487f-945e-4a1e3e17c508\") " pod="openstack-operators/glance-operator-index-bvvpv" Jan 31 14:58:22 crc kubenswrapper[4751]: I0131 14:58:22.858973 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtvgv\" (UniqueName: \"kubernetes.io/projected/eacc0c6c-95c4-487f-945e-4a1e3e17c508-kube-api-access-vtvgv\") pod \"glance-operator-index-bvvpv\" (UID: \"eacc0c6c-95c4-487f-945e-4a1e3e17c508\") " pod="openstack-operators/glance-operator-index-bvvpv" Jan 31 14:58:23 crc kubenswrapper[4751]: I0131 14:58:23.156940 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-index-bvvpv" Jan 31 14:58:23 crc kubenswrapper[4751]: I0131 14:58:23.380034 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-index-bvvpv"] Jan 31 14:58:23 crc kubenswrapper[4751]: I0131 14:58:23.918693 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/horizon-operator-controller-manager-6d4cb5b58-r8xn7" Jan 31 14:58:24 crc kubenswrapper[4751]: I0131 14:58:24.130924 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-index-bvvpv" event={"ID":"eacc0c6c-95c4-487f-945e-4a1e3e17c508","Type":"ContainerStarted","Data":"701664b77023940ba4b0968a1f7dc87bd2c93fe4b8f5f2f39b4e39a24e4b2f4b"} Jan 31 14:58:24 crc kubenswrapper[4751]: I0131 14:58:24.752823 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/440e5809-7b49-4b21-99dd-668468c84017-etc-swift\") pod \"swift-storage-0\" (UID: \"440e5809-7b49-4b21-99dd-668468c84017\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 14:58:24 crc kubenswrapper[4751]: E0131 14:58:24.752989 4751 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 14:58:24 crc kubenswrapper[4751]: E0131 14:58:24.753001 4751 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 14:58:24 crc kubenswrapper[4751]: E0131 14:58:24.753047 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/440e5809-7b49-4b21-99dd-668468c84017-etc-swift podName:440e5809-7b49-4b21-99dd-668468c84017 nodeName:}" failed. No retries permitted until 2026-01-31 14:58:28.75303457 +0000 UTC m=+1013.127747455 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/440e5809-7b49-4b21-99dd-668468c84017-etc-swift") pod "swift-storage-0" (UID: "440e5809-7b49-4b21-99dd-668468c84017") : configmap "swift-ring-files" not found Jan 31 14:58:24 crc kubenswrapper[4751]: I0131 14:58:24.909357 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/swift-ring-rebalance-z72xp"] Jan 31 14:58:24 crc kubenswrapper[4751]: I0131 14:58:24.910442 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/swift-ring-rebalance-z72xp" Jan 31 14:58:24 crc kubenswrapper[4751]: I0131 14:58:24.912886 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"swift-ring-config-data" Jan 31 14:58:24 crc kubenswrapper[4751]: I0131 14:58:24.913046 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"swift-ring-scripts" Jan 31 14:58:24 crc kubenswrapper[4751]: I0131 14:58:24.913053 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"swift-proxy-config-data" Jan 31 14:58:24 crc kubenswrapper[4751]: I0131 14:58:24.927536 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-ring-rebalance-z72xp"] Jan 31 14:58:25 crc kubenswrapper[4751]: I0131 14:58:25.057285 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/606aa4a9-2afe-4f51-a562-90f716040b58-swiftconf\") pod \"swift-ring-rebalance-z72xp\" (UID: \"606aa4a9-2afe-4f51-a562-90f716040b58\") " pod="glance-kuttl-tests/swift-ring-rebalance-z72xp" Jan 31 14:58:25 crc kubenswrapper[4751]: I0131 14:58:25.057364 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/606aa4a9-2afe-4f51-a562-90f716040b58-etc-swift\") pod 
\"swift-ring-rebalance-z72xp\" (UID: \"606aa4a9-2afe-4f51-a562-90f716040b58\") " pod="glance-kuttl-tests/swift-ring-rebalance-z72xp" Jan 31 14:58:25 crc kubenswrapper[4751]: I0131 14:58:25.057417 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/606aa4a9-2afe-4f51-a562-90f716040b58-scripts\") pod \"swift-ring-rebalance-z72xp\" (UID: \"606aa4a9-2afe-4f51-a562-90f716040b58\") " pod="glance-kuttl-tests/swift-ring-rebalance-z72xp" Jan 31 14:58:25 crc kubenswrapper[4751]: I0131 14:58:25.057503 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-84kdw\" (UniqueName: \"kubernetes.io/projected/606aa4a9-2afe-4f51-a562-90f716040b58-kube-api-access-84kdw\") pod \"swift-ring-rebalance-z72xp\" (UID: \"606aa4a9-2afe-4f51-a562-90f716040b58\") " pod="glance-kuttl-tests/swift-ring-rebalance-z72xp" Jan 31 14:58:25 crc kubenswrapper[4751]: I0131 14:58:25.057538 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/606aa4a9-2afe-4f51-a562-90f716040b58-dispersionconf\") pod \"swift-ring-rebalance-z72xp\" (UID: \"606aa4a9-2afe-4f51-a562-90f716040b58\") " pod="glance-kuttl-tests/swift-ring-rebalance-z72xp" Jan 31 14:58:25 crc kubenswrapper[4751]: I0131 14:58:25.057567 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/606aa4a9-2afe-4f51-a562-90f716040b58-ring-data-devices\") pod \"swift-ring-rebalance-z72xp\" (UID: \"606aa4a9-2afe-4f51-a562-90f716040b58\") " pod="glance-kuttl-tests/swift-ring-rebalance-z72xp" Jan 31 14:58:25 crc kubenswrapper[4751]: I0131 14:58:25.159130 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/configmap/606aa4a9-2afe-4f51-a562-90f716040b58-scripts\") pod \"swift-ring-rebalance-z72xp\" (UID: \"606aa4a9-2afe-4f51-a562-90f716040b58\") " pod="glance-kuttl-tests/swift-ring-rebalance-z72xp" Jan 31 14:58:25 crc kubenswrapper[4751]: I0131 14:58:25.159192 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-84kdw\" (UniqueName: \"kubernetes.io/projected/606aa4a9-2afe-4f51-a562-90f716040b58-kube-api-access-84kdw\") pod \"swift-ring-rebalance-z72xp\" (UID: \"606aa4a9-2afe-4f51-a562-90f716040b58\") " pod="glance-kuttl-tests/swift-ring-rebalance-z72xp" Jan 31 14:58:25 crc kubenswrapper[4751]: I0131 14:58:25.159216 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/606aa4a9-2afe-4f51-a562-90f716040b58-dispersionconf\") pod \"swift-ring-rebalance-z72xp\" (UID: \"606aa4a9-2afe-4f51-a562-90f716040b58\") " pod="glance-kuttl-tests/swift-ring-rebalance-z72xp" Jan 31 14:58:25 crc kubenswrapper[4751]: I0131 14:58:25.159241 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/606aa4a9-2afe-4f51-a562-90f716040b58-ring-data-devices\") pod \"swift-ring-rebalance-z72xp\" (UID: \"606aa4a9-2afe-4f51-a562-90f716040b58\") " pod="glance-kuttl-tests/swift-ring-rebalance-z72xp" Jan 31 14:58:25 crc kubenswrapper[4751]: I0131 14:58:25.159308 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/606aa4a9-2afe-4f51-a562-90f716040b58-swiftconf\") pod \"swift-ring-rebalance-z72xp\" (UID: \"606aa4a9-2afe-4f51-a562-90f716040b58\") " pod="glance-kuttl-tests/swift-ring-rebalance-z72xp" Jan 31 14:58:25 crc kubenswrapper[4751]: I0131 14:58:25.159331 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: 
\"kubernetes.io/empty-dir/606aa4a9-2afe-4f51-a562-90f716040b58-etc-swift\") pod \"swift-ring-rebalance-z72xp\" (UID: \"606aa4a9-2afe-4f51-a562-90f716040b58\") " pod="glance-kuttl-tests/swift-ring-rebalance-z72xp" Jan 31 14:58:25 crc kubenswrapper[4751]: I0131 14:58:25.159806 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/606aa4a9-2afe-4f51-a562-90f716040b58-etc-swift\") pod \"swift-ring-rebalance-z72xp\" (UID: \"606aa4a9-2afe-4f51-a562-90f716040b58\") " pod="glance-kuttl-tests/swift-ring-rebalance-z72xp" Jan 31 14:58:25 crc kubenswrapper[4751]: I0131 14:58:25.160210 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/606aa4a9-2afe-4f51-a562-90f716040b58-scripts\") pod \"swift-ring-rebalance-z72xp\" (UID: \"606aa4a9-2afe-4f51-a562-90f716040b58\") " pod="glance-kuttl-tests/swift-ring-rebalance-z72xp" Jan 31 14:58:25 crc kubenswrapper[4751]: I0131 14:58:25.160651 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/606aa4a9-2afe-4f51-a562-90f716040b58-ring-data-devices\") pod \"swift-ring-rebalance-z72xp\" (UID: \"606aa4a9-2afe-4f51-a562-90f716040b58\") " pod="glance-kuttl-tests/swift-ring-rebalance-z72xp" Jan 31 14:58:25 crc kubenswrapper[4751]: I0131 14:58:25.167566 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/606aa4a9-2afe-4f51-a562-90f716040b58-swiftconf\") pod \"swift-ring-rebalance-z72xp\" (UID: \"606aa4a9-2afe-4f51-a562-90f716040b58\") " pod="glance-kuttl-tests/swift-ring-rebalance-z72xp" Jan 31 14:58:25 crc kubenswrapper[4751]: I0131 14:58:25.169607 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/606aa4a9-2afe-4f51-a562-90f716040b58-dispersionconf\") pod 
\"swift-ring-rebalance-z72xp\" (UID: \"606aa4a9-2afe-4f51-a562-90f716040b58\") " pod="glance-kuttl-tests/swift-ring-rebalance-z72xp" Jan 31 14:58:25 crc kubenswrapper[4751]: I0131 14:58:25.182867 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-84kdw\" (UniqueName: \"kubernetes.io/projected/606aa4a9-2afe-4f51-a562-90f716040b58-kube-api-access-84kdw\") pod \"swift-ring-rebalance-z72xp\" (UID: \"606aa4a9-2afe-4f51-a562-90f716040b58\") " pod="glance-kuttl-tests/swift-ring-rebalance-z72xp" Jan 31 14:58:25 crc kubenswrapper[4751]: I0131 14:58:25.229929 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/swift-ring-rebalance-z72xp" Jan 31 14:58:26 crc kubenswrapper[4751]: I0131 14:58:26.237479 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-ring-rebalance-z72xp"] Jan 31 14:58:27 crc kubenswrapper[4751]: I0131 14:58:27.161852 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-index-bvvpv" event={"ID":"eacc0c6c-95c4-487f-945e-4a1e3e17c508","Type":"ContainerStarted","Data":"432c19d38a03c6f3813c47fccb7f600a3290b65e75e89319b377dead29257091"} Jan 31 14:58:27 crc kubenswrapper[4751]: I0131 14:58:27.164565 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-ring-rebalance-z72xp" event={"ID":"606aa4a9-2afe-4f51-a562-90f716040b58","Type":"ContainerStarted","Data":"6d59aea6f88c0b1dfe68e8fb50352f164b60343fcb2577fd3636718b016322c8"} Jan 31 14:58:27 crc kubenswrapper[4751]: I0131 14:58:27.181826 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-index-bvvpv" podStartSLOduration=1.597177464 podStartE2EDuration="5.181807642s" podCreationTimestamp="2026-01-31 14:58:22 +0000 UTC" firstStartedPulling="2026-01-31 14:58:23.389503719 +0000 UTC m=+1007.764216594" lastFinishedPulling="2026-01-31 14:58:26.974133887 +0000 UTC 
m=+1011.348846772" observedRunningTime="2026-01-31 14:58:27.177440857 +0000 UTC m=+1011.552153762" watchObservedRunningTime="2026-01-31 14:58:27.181807642 +0000 UTC m=+1011.556520557" Jan 31 14:58:28 crc kubenswrapper[4751]: I0131 14:58:28.815861 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/440e5809-7b49-4b21-99dd-668468c84017-etc-swift\") pod \"swift-storage-0\" (UID: \"440e5809-7b49-4b21-99dd-668468c84017\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 14:58:28 crc kubenswrapper[4751]: E0131 14:58:28.816009 4751 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 14:58:28 crc kubenswrapper[4751]: E0131 14:58:28.816152 4751 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 14:58:28 crc kubenswrapper[4751]: E0131 14:58:28.816206 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/440e5809-7b49-4b21-99dd-668468c84017-etc-swift podName:440e5809-7b49-4b21-99dd-668468c84017 nodeName:}" failed. No retries permitted until 2026-01-31 14:58:36.816187701 +0000 UTC m=+1021.190900586 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/440e5809-7b49-4b21-99dd-668468c84017-etc-swift") pod "swift-storage-0" (UID: "440e5809-7b49-4b21-99dd-668468c84017") : configmap "swift-ring-files" not found Jan 31 14:58:32 crc kubenswrapper[4751]: I0131 14:58:32.781590 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/keystone-859d455469-zqqzw" Jan 31 14:58:33 crc kubenswrapper[4751]: I0131 14:58:33.157876 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-index-bvvpv" Jan 31 14:58:33 crc kubenswrapper[4751]: I0131 14:58:33.157965 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openstack-operators/glance-operator-index-bvvpv" Jan 31 14:58:33 crc kubenswrapper[4751]: I0131 14:58:33.182569 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openstack-operators/glance-operator-index-bvvpv" Jan 31 14:58:33 crc kubenswrapper[4751]: I0131 14:58:33.239293 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-index-bvvpv" Jan 31 14:58:34 crc kubenswrapper[4751]: I0131 14:58:34.218527 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-ring-rebalance-z72xp" event={"ID":"606aa4a9-2afe-4f51-a562-90f716040b58","Type":"ContainerStarted","Data":"d7982a0dd9c095e8b3eb11a8ff02587d379ddd34791962ab93f48b60a33bec98"} Jan 31 14:58:34 crc kubenswrapper[4751]: I0131 14:58:34.238053 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/swift-ring-rebalance-z72xp" podStartSLOduration=3.72631647 podStartE2EDuration="10.238034396s" podCreationTimestamp="2026-01-31 14:58:24 +0000 UTC" firstStartedPulling="2026-01-31 14:58:26.842458052 +0000 UTC m=+1011.217170937" lastFinishedPulling="2026-01-31 14:58:33.354175958 +0000 UTC m=+1017.728888863" 
observedRunningTime="2026-01-31 14:58:34.236176037 +0000 UTC m=+1018.610888932" watchObservedRunningTime="2026-01-31 14:58:34.238034396 +0000 UTC m=+1018.612747281" Jan 31 14:58:36 crc kubenswrapper[4751]: I0131 14:58:36.851615 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/440e5809-7b49-4b21-99dd-668468c84017-etc-swift\") pod \"swift-storage-0\" (UID: \"440e5809-7b49-4b21-99dd-668468c84017\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 14:58:36 crc kubenswrapper[4751]: E0131 14:58:36.851796 4751 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 14:58:36 crc kubenswrapper[4751]: E0131 14:58:36.852214 4751 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-storage-0: configmap "swift-ring-files" not found Jan 31 14:58:36 crc kubenswrapper[4751]: E0131 14:58:36.852291 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/440e5809-7b49-4b21-99dd-668468c84017-etc-swift podName:440e5809-7b49-4b21-99dd-668468c84017 nodeName:}" failed. No retries permitted until 2026-01-31 14:58:52.85226761 +0000 UTC m=+1037.226980515 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/440e5809-7b49-4b21-99dd-668468c84017-etc-swift") pod "swift-storage-0" (UID: "440e5809-7b49-4b21-99dd-668468c84017") : configmap "swift-ring-files" not found Jan 31 14:58:37 crc kubenswrapper[4751]: I0131 14:58:37.884640 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/swift-proxy-6d699db77c-58vrl"] Jan 31 14:58:37 crc kubenswrapper[4751]: I0131 14:58:37.886157 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" Jan 31 14:58:37 crc kubenswrapper[4751]: I0131 14:58:37.896267 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-proxy-6d699db77c-58vrl"] Jan 31 14:58:37 crc kubenswrapper[4751]: I0131 14:58:37.967714 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26ee66f9-5607-4559-9a64-6767dfbcc078-run-httpd\") pod \"swift-proxy-6d699db77c-58vrl\" (UID: \"26ee66f9-5607-4559-9a64-6767dfbcc078\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" Jan 31 14:58:37 crc kubenswrapper[4751]: I0131 14:58:37.967837 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26ee66f9-5607-4559-9a64-6767dfbcc078-log-httpd\") pod \"swift-proxy-6d699db77c-58vrl\" (UID: \"26ee66f9-5607-4559-9a64-6767dfbcc078\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" Jan 31 14:58:37 crc kubenswrapper[4751]: I0131 14:58:37.967878 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/26ee66f9-5607-4559-9a64-6767dfbcc078-etc-swift\") pod \"swift-proxy-6d699db77c-58vrl\" (UID: \"26ee66f9-5607-4559-9a64-6767dfbcc078\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" Jan 31 14:58:37 crc kubenswrapper[4751]: I0131 14:58:37.967901 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6xgx9\" (UniqueName: \"kubernetes.io/projected/26ee66f9-5607-4559-9a64-6767dfbcc078-kube-api-access-6xgx9\") pod \"swift-proxy-6d699db77c-58vrl\" (UID: \"26ee66f9-5607-4559-9a64-6767dfbcc078\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" Jan 31 14:58:37 crc kubenswrapper[4751]: I0131 14:58:37.967919 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26ee66f9-5607-4559-9a64-6767dfbcc078-config-data\") pod \"swift-proxy-6d699db77c-58vrl\" (UID: \"26ee66f9-5607-4559-9a64-6767dfbcc078\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" Jan 31 14:58:38 crc kubenswrapper[4751]: I0131 14:58:38.068673 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26ee66f9-5607-4559-9a64-6767dfbcc078-log-httpd\") pod \"swift-proxy-6d699db77c-58vrl\" (UID: \"26ee66f9-5607-4559-9a64-6767dfbcc078\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" Jan 31 14:58:38 crc kubenswrapper[4751]: I0131 14:58:38.068964 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/26ee66f9-5607-4559-9a64-6767dfbcc078-etc-swift\") pod \"swift-proxy-6d699db77c-58vrl\" (UID: \"26ee66f9-5607-4559-9a64-6767dfbcc078\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" Jan 31 14:58:38 crc kubenswrapper[4751]: I0131 14:58:38.068986 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6xgx9\" (UniqueName: \"kubernetes.io/projected/26ee66f9-5607-4559-9a64-6767dfbcc078-kube-api-access-6xgx9\") pod \"swift-proxy-6d699db77c-58vrl\" (UID: \"26ee66f9-5607-4559-9a64-6767dfbcc078\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" Jan 31 14:58:38 crc kubenswrapper[4751]: I0131 14:58:38.069006 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26ee66f9-5607-4559-9a64-6767dfbcc078-config-data\") pod \"swift-proxy-6d699db77c-58vrl\" (UID: \"26ee66f9-5607-4559-9a64-6767dfbcc078\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" Jan 31 14:58:38 crc kubenswrapper[4751]: I0131 14:58:38.069091 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26ee66f9-5607-4559-9a64-6767dfbcc078-run-httpd\") pod \"swift-proxy-6d699db77c-58vrl\" (UID: \"26ee66f9-5607-4559-9a64-6767dfbcc078\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" Jan 31 14:58:38 crc kubenswrapper[4751]: I0131 14:58:38.069206 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26ee66f9-5607-4559-9a64-6767dfbcc078-log-httpd\") pod \"swift-proxy-6d699db77c-58vrl\" (UID: \"26ee66f9-5607-4559-9a64-6767dfbcc078\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" Jan 31 14:58:38 crc kubenswrapper[4751]: E0131 14:58:38.069276 4751 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 14:58:38 crc kubenswrapper[4751]: E0131 14:58:38.069303 4751 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-proxy-6d699db77c-58vrl: configmap "swift-ring-files" not found Jan 31 14:58:38 crc kubenswrapper[4751]: E0131 14:58:38.069363 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/26ee66f9-5607-4559-9a64-6767dfbcc078-etc-swift podName:26ee66f9-5607-4559-9a64-6767dfbcc078 nodeName:}" failed. No retries permitted until 2026-01-31 14:58:38.569345357 +0000 UTC m=+1022.944058242 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/26ee66f9-5607-4559-9a64-6767dfbcc078-etc-swift") pod "swift-proxy-6d699db77c-58vrl" (UID: "26ee66f9-5607-4559-9a64-6767dfbcc078") : configmap "swift-ring-files" not found Jan 31 14:58:38 crc kubenswrapper[4751]: I0131 14:58:38.069611 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26ee66f9-5607-4559-9a64-6767dfbcc078-run-httpd\") pod \"swift-proxy-6d699db77c-58vrl\" (UID: \"26ee66f9-5607-4559-9a64-6767dfbcc078\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" Jan 31 14:58:38 crc kubenswrapper[4751]: I0131 14:58:38.077645 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26ee66f9-5607-4559-9a64-6767dfbcc078-config-data\") pod \"swift-proxy-6d699db77c-58vrl\" (UID: \"26ee66f9-5607-4559-9a64-6767dfbcc078\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" Jan 31 14:58:38 crc kubenswrapper[4751]: I0131 14:58:38.088374 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6xgx9\" (UniqueName: \"kubernetes.io/projected/26ee66f9-5607-4559-9a64-6767dfbcc078-kube-api-access-6xgx9\") pod \"swift-proxy-6d699db77c-58vrl\" (UID: \"26ee66f9-5607-4559-9a64-6767dfbcc078\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" Jan 31 14:58:38 crc kubenswrapper[4751]: I0131 14:58:38.574910 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/26ee66f9-5607-4559-9a64-6767dfbcc078-etc-swift\") pod \"swift-proxy-6d699db77c-58vrl\" (UID: \"26ee66f9-5607-4559-9a64-6767dfbcc078\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" Jan 31 14:58:38 crc kubenswrapper[4751]: E0131 14:58:38.575134 4751 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not 
found Jan 31 14:58:38 crc kubenswrapper[4751]: E0131 14:58:38.575165 4751 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-proxy-6d699db77c-58vrl: configmap "swift-ring-files" not found Jan 31 14:58:38 crc kubenswrapper[4751]: E0131 14:58:38.575222 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/26ee66f9-5607-4559-9a64-6767dfbcc078-etc-swift podName:26ee66f9-5607-4559-9a64-6767dfbcc078 nodeName:}" failed. No retries permitted until 2026-01-31 14:58:39.575204869 +0000 UTC m=+1023.949917754 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/26ee66f9-5607-4559-9a64-6767dfbcc078-etc-swift") pod "swift-proxy-6d699db77c-58vrl" (UID: "26ee66f9-5607-4559-9a64-6767dfbcc078") : configmap "swift-ring-files" not found Jan 31 14:58:38 crc kubenswrapper[4751]: I0131 14:58:38.897186 4751 patch_prober.go:28] interesting pod/machine-config-daemon-2wpj7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 14:58:38 crc kubenswrapper[4751]: I0131 14:58:38.897278 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 14:58:39 crc kubenswrapper[4751]: I0131 14:58:39.590313 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/26ee66f9-5607-4559-9a64-6767dfbcc078-etc-swift\") pod \"swift-proxy-6d699db77c-58vrl\" (UID: \"26ee66f9-5607-4559-9a64-6767dfbcc078\") " 
pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" Jan 31 14:58:39 crc kubenswrapper[4751]: E0131 14:58:39.590542 4751 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 14:58:39 crc kubenswrapper[4751]: E0131 14:58:39.590957 4751 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-proxy-6d699db77c-58vrl: configmap "swift-ring-files" not found Jan 31 14:58:39 crc kubenswrapper[4751]: E0131 14:58:39.591057 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/26ee66f9-5607-4559-9a64-6767dfbcc078-etc-swift podName:26ee66f9-5607-4559-9a64-6767dfbcc078 nodeName:}" failed. No retries permitted until 2026-01-31 14:58:41.59102945 +0000 UTC m=+1025.965742345 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/26ee66f9-5607-4559-9a64-6767dfbcc078-etc-swift") pod "swift-proxy-6d699db77c-58vrl" (UID: "26ee66f9-5607-4559-9a64-6767dfbcc078") : configmap "swift-ring-files" not found Jan 31 14:58:40 crc kubenswrapper[4751]: I0131 14:58:40.961236 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/9a978acee48f3b64f3e45376f243bba270f67c38c120bc2488cd2a78caspwrp"] Jan 31 14:58:40 crc kubenswrapper[4751]: I0131 14:58:40.962490 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9a978acee48f3b64f3e45376f243bba270f67c38c120bc2488cd2a78caspwrp" Jan 31 14:58:40 crc kubenswrapper[4751]: I0131 14:58:40.965165 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"default-dockercfg-wxkjx" Jan 31 14:58:40 crc kubenswrapper[4751]: I0131 14:58:40.975411 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9a978acee48f3b64f3e45376f243bba270f67c38c120bc2488cd2a78caspwrp"] Jan 31 14:58:41 crc kubenswrapper[4751]: I0131 14:58:41.112764 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rzcg8\" (UniqueName: \"kubernetes.io/projected/585f0c4b-3594-4683-bb38-d1fcbbee12cd-kube-api-access-rzcg8\") pod \"9a978acee48f3b64f3e45376f243bba270f67c38c120bc2488cd2a78caspwrp\" (UID: \"585f0c4b-3594-4683-bb38-d1fcbbee12cd\") " pod="openstack-operators/9a978acee48f3b64f3e45376f243bba270f67c38c120bc2488cd2a78caspwrp" Jan 31 14:58:41 crc kubenswrapper[4751]: I0131 14:58:41.113143 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/585f0c4b-3594-4683-bb38-d1fcbbee12cd-util\") pod \"9a978acee48f3b64f3e45376f243bba270f67c38c120bc2488cd2a78caspwrp\" (UID: \"585f0c4b-3594-4683-bb38-d1fcbbee12cd\") " pod="openstack-operators/9a978acee48f3b64f3e45376f243bba270f67c38c120bc2488cd2a78caspwrp" Jan 31 14:58:41 crc kubenswrapper[4751]: I0131 14:58:41.113428 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/585f0c4b-3594-4683-bb38-d1fcbbee12cd-bundle\") pod \"9a978acee48f3b64f3e45376f243bba270f67c38c120bc2488cd2a78caspwrp\" (UID: \"585f0c4b-3594-4683-bb38-d1fcbbee12cd\") " pod="openstack-operators/9a978acee48f3b64f3e45376f243bba270f67c38c120bc2488cd2a78caspwrp" Jan 31 14:58:41 crc kubenswrapper[4751]: I0131 
14:58:41.215161 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/585f0c4b-3594-4683-bb38-d1fcbbee12cd-util\") pod \"9a978acee48f3b64f3e45376f243bba270f67c38c120bc2488cd2a78caspwrp\" (UID: \"585f0c4b-3594-4683-bb38-d1fcbbee12cd\") " pod="openstack-operators/9a978acee48f3b64f3e45376f243bba270f67c38c120bc2488cd2a78caspwrp" Jan 31 14:58:41 crc kubenswrapper[4751]: I0131 14:58:41.215660 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/585f0c4b-3594-4683-bb38-d1fcbbee12cd-util\") pod \"9a978acee48f3b64f3e45376f243bba270f67c38c120bc2488cd2a78caspwrp\" (UID: \"585f0c4b-3594-4683-bb38-d1fcbbee12cd\") " pod="openstack-operators/9a978acee48f3b64f3e45376f243bba270f67c38c120bc2488cd2a78caspwrp" Jan 31 14:58:41 crc kubenswrapper[4751]: I0131 14:58:41.215999 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/585f0c4b-3594-4683-bb38-d1fcbbee12cd-bundle\") pod \"9a978acee48f3b64f3e45376f243bba270f67c38c120bc2488cd2a78caspwrp\" (UID: \"585f0c4b-3594-4683-bb38-d1fcbbee12cd\") " pod="openstack-operators/9a978acee48f3b64f3e45376f243bba270f67c38c120bc2488cd2a78caspwrp" Jan 31 14:58:41 crc kubenswrapper[4751]: I0131 14:58:41.216283 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rzcg8\" (UniqueName: \"kubernetes.io/projected/585f0c4b-3594-4683-bb38-d1fcbbee12cd-kube-api-access-rzcg8\") pod \"9a978acee48f3b64f3e45376f243bba270f67c38c120bc2488cd2a78caspwrp\" (UID: \"585f0c4b-3594-4683-bb38-d1fcbbee12cd\") " pod="openstack-operators/9a978acee48f3b64f3e45376f243bba270f67c38c120bc2488cd2a78caspwrp" Jan 31 14:58:41 crc kubenswrapper[4751]: I0131 14:58:41.216570 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/585f0c4b-3594-4683-bb38-d1fcbbee12cd-bundle\") pod \"9a978acee48f3b64f3e45376f243bba270f67c38c120bc2488cd2a78caspwrp\" (UID: \"585f0c4b-3594-4683-bb38-d1fcbbee12cd\") " pod="openstack-operators/9a978acee48f3b64f3e45376f243bba270f67c38c120bc2488cd2a78caspwrp" Jan 31 14:58:41 crc kubenswrapper[4751]: I0131 14:58:41.240372 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rzcg8\" (UniqueName: \"kubernetes.io/projected/585f0c4b-3594-4683-bb38-d1fcbbee12cd-kube-api-access-rzcg8\") pod \"9a978acee48f3b64f3e45376f243bba270f67c38c120bc2488cd2a78caspwrp\" (UID: \"585f0c4b-3594-4683-bb38-d1fcbbee12cd\") " pod="openstack-operators/9a978acee48f3b64f3e45376f243bba270f67c38c120bc2488cd2a78caspwrp" Jan 31 14:58:41 crc kubenswrapper[4751]: I0131 14:58:41.280872 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/9a978acee48f3b64f3e45376f243bba270f67c38c120bc2488cd2a78caspwrp" Jan 31 14:58:41 crc kubenswrapper[4751]: I0131 14:58:41.621923 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/26ee66f9-5607-4559-9a64-6767dfbcc078-etc-swift\") pod \"swift-proxy-6d699db77c-58vrl\" (UID: \"26ee66f9-5607-4559-9a64-6767dfbcc078\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" Jan 31 14:58:41 crc kubenswrapper[4751]: E0131 14:58:41.622132 4751 projected.go:288] Couldn't get configMap glance-kuttl-tests/swift-ring-files: configmap "swift-ring-files" not found Jan 31 14:58:41 crc kubenswrapper[4751]: E0131 14:58:41.622149 4751 projected.go:194] Error preparing data for projected volume etc-swift for pod glance-kuttl-tests/swift-proxy-6d699db77c-58vrl: configmap "swift-ring-files" not found Jan 31 14:58:41 crc kubenswrapper[4751]: E0131 14:58:41.622201 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/26ee66f9-5607-4559-9a64-6767dfbcc078-etc-swift 
podName:26ee66f9-5607-4559-9a64-6767dfbcc078 nodeName:}" failed. No retries permitted until 2026-01-31 14:58:45.622182879 +0000 UTC m=+1029.996895774 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "etc-swift" (UniqueName: "kubernetes.io/projected/26ee66f9-5607-4559-9a64-6767dfbcc078-etc-swift") pod "swift-proxy-6d699db77c-58vrl" (UID: "26ee66f9-5607-4559-9a64-6767dfbcc078") : configmap "swift-ring-files" not found Jan 31 14:58:41 crc kubenswrapper[4751]: I0131 14:58:41.756122 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/9a978acee48f3b64f3e45376f243bba270f67c38c120bc2488cd2a78caspwrp"] Jan 31 14:58:42 crc kubenswrapper[4751]: I0131 14:58:42.277569 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9a978acee48f3b64f3e45376f243bba270f67c38c120bc2488cd2a78caspwrp" event={"ID":"585f0c4b-3594-4683-bb38-d1fcbbee12cd","Type":"ContainerStarted","Data":"fe746551f3718a5230142c952bda79cd40176a8ea52e7c55612bd125b16c09e1"} Jan 31 14:58:42 crc kubenswrapper[4751]: I0131 14:58:42.279992 4751 generic.go:334] "Generic (PLEG): container finished" podID="606aa4a9-2afe-4f51-a562-90f716040b58" containerID="d7982a0dd9c095e8b3eb11a8ff02587d379ddd34791962ab93f48b60a33bec98" exitCode=0 Jan 31 14:58:42 crc kubenswrapper[4751]: I0131 14:58:42.280038 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-ring-rebalance-z72xp" event={"ID":"606aa4a9-2afe-4f51-a562-90f716040b58","Type":"ContainerDied","Data":"d7982a0dd9c095e8b3eb11a8ff02587d379ddd34791962ab93f48b60a33bec98"} Jan 31 14:58:43 crc kubenswrapper[4751]: I0131 14:58:43.686888 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/swift-ring-rebalance-z72xp" Jan 31 14:58:43 crc kubenswrapper[4751]: I0131 14:58:43.856711 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-84kdw\" (UniqueName: \"kubernetes.io/projected/606aa4a9-2afe-4f51-a562-90f716040b58-kube-api-access-84kdw\") pod \"606aa4a9-2afe-4f51-a562-90f716040b58\" (UID: \"606aa4a9-2afe-4f51-a562-90f716040b58\") " Jan 31 14:58:43 crc kubenswrapper[4751]: I0131 14:58:43.856807 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/606aa4a9-2afe-4f51-a562-90f716040b58-swiftconf\") pod \"606aa4a9-2afe-4f51-a562-90f716040b58\" (UID: \"606aa4a9-2afe-4f51-a562-90f716040b58\") " Jan 31 14:58:43 crc kubenswrapper[4751]: I0131 14:58:43.856883 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/606aa4a9-2afe-4f51-a562-90f716040b58-etc-swift\") pod \"606aa4a9-2afe-4f51-a562-90f716040b58\" (UID: \"606aa4a9-2afe-4f51-a562-90f716040b58\") " Jan 31 14:58:43 crc kubenswrapper[4751]: I0131 14:58:43.856915 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/606aa4a9-2afe-4f51-a562-90f716040b58-scripts\") pod \"606aa4a9-2afe-4f51-a562-90f716040b58\" (UID: \"606aa4a9-2afe-4f51-a562-90f716040b58\") " Jan 31 14:58:43 crc kubenswrapper[4751]: I0131 14:58:43.856990 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/606aa4a9-2afe-4f51-a562-90f716040b58-ring-data-devices\") pod \"606aa4a9-2afe-4f51-a562-90f716040b58\" (UID: \"606aa4a9-2afe-4f51-a562-90f716040b58\") " Jan 31 14:58:43 crc kubenswrapper[4751]: I0131 14:58:43.857058 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dispersionconf\" 
(UniqueName: \"kubernetes.io/secret/606aa4a9-2afe-4f51-a562-90f716040b58-dispersionconf\") pod \"606aa4a9-2afe-4f51-a562-90f716040b58\" (UID: \"606aa4a9-2afe-4f51-a562-90f716040b58\") " Jan 31 14:58:43 crc kubenswrapper[4751]: I0131 14:58:43.857819 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/606aa4a9-2afe-4f51-a562-90f716040b58-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "606aa4a9-2afe-4f51-a562-90f716040b58" (UID: "606aa4a9-2afe-4f51-a562-90f716040b58"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:58:43 crc kubenswrapper[4751]: I0131 14:58:43.858305 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/606aa4a9-2afe-4f51-a562-90f716040b58-ring-data-devices" (OuterVolumeSpecName: "ring-data-devices") pod "606aa4a9-2afe-4f51-a562-90f716040b58" (UID: "606aa4a9-2afe-4f51-a562-90f716040b58"). InnerVolumeSpecName "ring-data-devices". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:43 crc kubenswrapper[4751]: I0131 14:58:43.862472 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/606aa4a9-2afe-4f51-a562-90f716040b58-kube-api-access-84kdw" (OuterVolumeSpecName: "kube-api-access-84kdw") pod "606aa4a9-2afe-4f51-a562-90f716040b58" (UID: "606aa4a9-2afe-4f51-a562-90f716040b58"). InnerVolumeSpecName "kube-api-access-84kdw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:43 crc kubenswrapper[4751]: I0131 14:58:43.867233 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/606aa4a9-2afe-4f51-a562-90f716040b58-dispersionconf" (OuterVolumeSpecName: "dispersionconf") pod "606aa4a9-2afe-4f51-a562-90f716040b58" (UID: "606aa4a9-2afe-4f51-a562-90f716040b58"). InnerVolumeSpecName "dispersionconf". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:43 crc kubenswrapper[4751]: I0131 14:58:43.880221 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/606aa4a9-2afe-4f51-a562-90f716040b58-swiftconf" (OuterVolumeSpecName: "swiftconf") pod "606aa4a9-2afe-4f51-a562-90f716040b58" (UID: "606aa4a9-2afe-4f51-a562-90f716040b58"). InnerVolumeSpecName "swiftconf". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:58:43 crc kubenswrapper[4751]: I0131 14:58:43.890920 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/606aa4a9-2afe-4f51-a562-90f716040b58-scripts" (OuterVolumeSpecName: "scripts") pod "606aa4a9-2afe-4f51-a562-90f716040b58" (UID: "606aa4a9-2afe-4f51-a562-90f716040b58"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:58:43 crc kubenswrapper[4751]: I0131 14:58:43.958555 4751 reconciler_common.go:293] "Volume detached for volume \"ring-data-devices\" (UniqueName: \"kubernetes.io/configmap/606aa4a9-2afe-4f51-a562-90f716040b58-ring-data-devices\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:43 crc kubenswrapper[4751]: I0131 14:58:43.958589 4751 reconciler_common.go:293] "Volume detached for volume \"dispersionconf\" (UniqueName: \"kubernetes.io/secret/606aa4a9-2afe-4f51-a562-90f716040b58-dispersionconf\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:43 crc kubenswrapper[4751]: I0131 14:58:43.958600 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-84kdw\" (UniqueName: \"kubernetes.io/projected/606aa4a9-2afe-4f51-a562-90f716040b58-kube-api-access-84kdw\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:43 crc kubenswrapper[4751]: I0131 14:58:43.958613 4751 reconciler_common.go:293] "Volume detached for volume \"swiftconf\" (UniqueName: \"kubernetes.io/secret/606aa4a9-2afe-4f51-a562-90f716040b58-swiftconf\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:43 crc 
kubenswrapper[4751]: I0131 14:58:43.958624 4751 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/empty-dir/606aa4a9-2afe-4f51-a562-90f716040b58-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:43 crc kubenswrapper[4751]: I0131 14:58:43.958635 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/configmap/606aa4a9-2afe-4f51-a562-90f716040b58-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:44 crc kubenswrapper[4751]: I0131 14:58:44.298103 4751 generic.go:334] "Generic (PLEG): container finished" podID="585f0c4b-3594-4683-bb38-d1fcbbee12cd" containerID="3bbca91afaf0c02d15eadfd14c9b7b21724ed7ad9f88766a7c7a0c41fcf118a3" exitCode=0 Jan 31 14:58:44 crc kubenswrapper[4751]: I0131 14:58:44.298177 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9a978acee48f3b64f3e45376f243bba270f67c38c120bc2488cd2a78caspwrp" event={"ID":"585f0c4b-3594-4683-bb38-d1fcbbee12cd","Type":"ContainerDied","Data":"3bbca91afaf0c02d15eadfd14c9b7b21724ed7ad9f88766a7c7a0c41fcf118a3"} Jan 31 14:58:44 crc kubenswrapper[4751]: I0131 14:58:44.299662 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-ring-rebalance-z72xp" event={"ID":"606aa4a9-2afe-4f51-a562-90f716040b58","Type":"ContainerDied","Data":"6d59aea6f88c0b1dfe68e8fb50352f164b60343fcb2577fd3636718b016322c8"} Jan 31 14:58:44 crc kubenswrapper[4751]: I0131 14:58:44.299687 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/swift-ring-rebalance-z72xp" Jan 31 14:58:44 crc kubenswrapper[4751]: I0131 14:58:44.299721 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6d59aea6f88c0b1dfe68e8fb50352f164b60343fcb2577fd3636718b016322c8" Jan 31 14:58:45 crc kubenswrapper[4751]: I0131 14:58:45.309869 4751 generic.go:334] "Generic (PLEG): container finished" podID="585f0c4b-3594-4683-bb38-d1fcbbee12cd" containerID="f8a8825d481236aeb9aa96c02aca48495f3689b5e59d7cbdc781d2a43a293e1d" exitCode=0 Jan 31 14:58:45 crc kubenswrapper[4751]: I0131 14:58:45.309906 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9a978acee48f3b64f3e45376f243bba270f67c38c120bc2488cd2a78caspwrp" event={"ID":"585f0c4b-3594-4683-bb38-d1fcbbee12cd","Type":"ContainerDied","Data":"f8a8825d481236aeb9aa96c02aca48495f3689b5e59d7cbdc781d2a43a293e1d"} Jan 31 14:58:45 crc kubenswrapper[4751]: I0131 14:58:45.687735 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/26ee66f9-5607-4559-9a64-6767dfbcc078-etc-swift\") pod \"swift-proxy-6d699db77c-58vrl\" (UID: \"26ee66f9-5607-4559-9a64-6767dfbcc078\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" Jan 31 14:58:45 crc kubenswrapper[4751]: I0131 14:58:45.696492 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/26ee66f9-5607-4559-9a64-6767dfbcc078-etc-swift\") pod \"swift-proxy-6d699db77c-58vrl\" (UID: \"26ee66f9-5607-4559-9a64-6767dfbcc078\") " pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" Jan 31 14:58:45 crc kubenswrapper[4751]: I0131 14:58:45.705143 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" Jan 31 14:58:45 crc kubenswrapper[4751]: I0131 14:58:45.940222 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-proxy-6d699db77c-58vrl"] Jan 31 14:58:45 crc kubenswrapper[4751]: W0131 14:58:45.946271 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod26ee66f9_5607_4559_9a64_6767dfbcc078.slice/crio-85b271a6f57bacb15bd471b08b0e25366c5d1865f74c103bc014e71042620a53 WatchSource:0}: Error finding container 85b271a6f57bacb15bd471b08b0e25366c5d1865f74c103bc014e71042620a53: Status 404 returned error can't find the container with id 85b271a6f57bacb15bd471b08b0e25366c5d1865f74c103bc014e71042620a53 Jan 31 14:58:46 crc kubenswrapper[4751]: I0131 14:58:46.318251 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" event={"ID":"26ee66f9-5607-4559-9a64-6767dfbcc078","Type":"ContainerStarted","Data":"67b86444b56a0b82ee27cdb476ad3cf81bbe2a2988cb8d86234cd5cb875fb2be"} Jan 31 14:58:46 crc kubenswrapper[4751]: I0131 14:58:46.318513 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" event={"ID":"26ee66f9-5607-4559-9a64-6767dfbcc078","Type":"ContainerStarted","Data":"7994c2196fb62df3ba578a245a33690acd7fb1518638072c0dcea5a66bf4d321"} Jan 31 14:58:46 crc kubenswrapper[4751]: I0131 14:58:46.318523 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" event={"ID":"26ee66f9-5607-4559-9a64-6767dfbcc078","Type":"ContainerStarted","Data":"85b271a6f57bacb15bd471b08b0e25366c5d1865f74c103bc014e71042620a53"} Jan 31 14:58:46 crc kubenswrapper[4751]: I0131 14:58:46.318538 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" Jan 31 14:58:46 crc kubenswrapper[4751]: I0131 
14:58:46.321214 4751 generic.go:334] "Generic (PLEG): container finished" podID="585f0c4b-3594-4683-bb38-d1fcbbee12cd" containerID="30a855aaf2538d16f15d520cfdce2fe3cf7008190e9478d912986cc8f0f389d2" exitCode=0 Jan 31 14:58:46 crc kubenswrapper[4751]: I0131 14:58:46.321316 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9a978acee48f3b64f3e45376f243bba270f67c38c120bc2488cd2a78caspwrp" event={"ID":"585f0c4b-3594-4683-bb38-d1fcbbee12cd","Type":"ContainerDied","Data":"30a855aaf2538d16f15d520cfdce2fe3cf7008190e9478d912986cc8f0f389d2"} Jan 31 14:58:46 crc kubenswrapper[4751]: I0131 14:58:46.361364 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" podStartSLOduration=9.36134065 podStartE2EDuration="9.36134065s" podCreationTimestamp="2026-01-31 14:58:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:58:46.340843441 +0000 UTC m=+1030.715556336" watchObservedRunningTime="2026-01-31 14:58:46.36134065 +0000 UTC m=+1030.736053535" Jan 31 14:58:47 crc kubenswrapper[4751]: I0131 14:58:47.330048 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" Jan 31 14:58:47 crc kubenswrapper[4751]: I0131 14:58:47.595735 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9a978acee48f3b64f3e45376f243bba270f67c38c120bc2488cd2a78caspwrp" Jan 31 14:58:47 crc kubenswrapper[4751]: I0131 14:58:47.730264 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/585f0c4b-3594-4683-bb38-d1fcbbee12cd-bundle\") pod \"585f0c4b-3594-4683-bb38-d1fcbbee12cd\" (UID: \"585f0c4b-3594-4683-bb38-d1fcbbee12cd\") " Jan 31 14:58:47 crc kubenswrapper[4751]: I0131 14:58:47.730407 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzcg8\" (UniqueName: \"kubernetes.io/projected/585f0c4b-3594-4683-bb38-d1fcbbee12cd-kube-api-access-rzcg8\") pod \"585f0c4b-3594-4683-bb38-d1fcbbee12cd\" (UID: \"585f0c4b-3594-4683-bb38-d1fcbbee12cd\") " Jan 31 14:58:47 crc kubenswrapper[4751]: I0131 14:58:47.730485 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/585f0c4b-3594-4683-bb38-d1fcbbee12cd-util\") pod \"585f0c4b-3594-4683-bb38-d1fcbbee12cd\" (UID: \"585f0c4b-3594-4683-bb38-d1fcbbee12cd\") " Jan 31 14:58:47 crc kubenswrapper[4751]: I0131 14:58:47.732786 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/585f0c4b-3594-4683-bb38-d1fcbbee12cd-bundle" (OuterVolumeSpecName: "bundle") pod "585f0c4b-3594-4683-bb38-d1fcbbee12cd" (UID: "585f0c4b-3594-4683-bb38-d1fcbbee12cd"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:58:47 crc kubenswrapper[4751]: I0131 14:58:47.737388 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/585f0c4b-3594-4683-bb38-d1fcbbee12cd-kube-api-access-rzcg8" (OuterVolumeSpecName: "kube-api-access-rzcg8") pod "585f0c4b-3594-4683-bb38-d1fcbbee12cd" (UID: "585f0c4b-3594-4683-bb38-d1fcbbee12cd"). InnerVolumeSpecName "kube-api-access-rzcg8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:58:47 crc kubenswrapper[4751]: I0131 14:58:47.745041 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/585f0c4b-3594-4683-bb38-d1fcbbee12cd-util" (OuterVolumeSpecName: "util") pod "585f0c4b-3594-4683-bb38-d1fcbbee12cd" (UID: "585f0c4b-3594-4683-bb38-d1fcbbee12cd"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 14:58:47 crc kubenswrapper[4751]: I0131 14:58:47.832233 4751 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/585f0c4b-3594-4683-bb38-d1fcbbee12cd-util\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:47 crc kubenswrapper[4751]: I0131 14:58:47.832267 4751 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/585f0c4b-3594-4683-bb38-d1fcbbee12cd-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:47 crc kubenswrapper[4751]: I0131 14:58:47.832278 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzcg8\" (UniqueName: \"kubernetes.io/projected/585f0c4b-3594-4683-bb38-d1fcbbee12cd-kube-api-access-rzcg8\") on node \"crc\" DevicePath \"\"" Jan 31 14:58:48 crc kubenswrapper[4751]: I0131 14:58:48.340511 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/9a978acee48f3b64f3e45376f243bba270f67c38c120bc2488cd2a78caspwrp" Jan 31 14:58:48 crc kubenswrapper[4751]: I0131 14:58:48.340522 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/9a978acee48f3b64f3e45376f243bba270f67c38c120bc2488cd2a78caspwrp" event={"ID":"585f0c4b-3594-4683-bb38-d1fcbbee12cd","Type":"ContainerDied","Data":"fe746551f3718a5230142c952bda79cd40176a8ea52e7c55612bd125b16c09e1"} Jan 31 14:58:48 crc kubenswrapper[4751]: I0131 14:58:48.340573 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fe746551f3718a5230142c952bda79cd40176a8ea52e7c55612bd125b16c09e1" Jan 31 14:58:52 crc kubenswrapper[4751]: I0131 14:58:52.907189 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/440e5809-7b49-4b21-99dd-668468c84017-etc-swift\") pod \"swift-storage-0\" (UID: \"440e5809-7b49-4b21-99dd-668468c84017\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 14:58:52 crc kubenswrapper[4751]: I0131 14:58:52.919674 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/440e5809-7b49-4b21-99dd-668468c84017-etc-swift\") pod \"swift-storage-0\" (UID: \"440e5809-7b49-4b21-99dd-668468c84017\") " pod="glance-kuttl-tests/swift-storage-0" Jan 31 14:58:53 crc kubenswrapper[4751]: I0131 14:58:53.032728 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/swift-storage-0" Jan 31 14:58:53 crc kubenswrapper[4751]: I0131 14:58:53.522880 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/swift-storage-0"] Jan 31 14:58:54 crc kubenswrapper[4751]: I0131 14:58:54.382976 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"440e5809-7b49-4b21-99dd-668468c84017","Type":"ContainerStarted","Data":"7955d37d9d1be24fa8d9a015aa2ea953036cee2a0334d1dbf39fdbe1dcef40e5"} Jan 31 14:58:55 crc kubenswrapper[4751]: I0131 14:58:55.392286 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"440e5809-7b49-4b21-99dd-668468c84017","Type":"ContainerStarted","Data":"a4e14596c5c3a7af2ea9e82736c916fc73b8fcbf27a523b8fe47f9a8e69b1bc2"} Jan 31 14:58:55 crc kubenswrapper[4751]: I0131 14:58:55.392576 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"440e5809-7b49-4b21-99dd-668468c84017","Type":"ContainerStarted","Data":"461a1aaa8bc72705195647c97b28e111484e900c69e9a4da07e510a6c451ed4c"} Jan 31 14:58:55 crc kubenswrapper[4751]: I0131 14:58:55.392589 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"440e5809-7b49-4b21-99dd-668468c84017","Type":"ContainerStarted","Data":"d0ab6cd06ea2abbd171a5345dc579495df175d9d8a52b30a0139e24e65e43616"} Jan 31 14:58:55 crc kubenswrapper[4751]: I0131 14:58:55.392602 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"440e5809-7b49-4b21-99dd-668468c84017","Type":"ContainerStarted","Data":"d93f0c8cc4f4e310c9d207351f924f281c14e44b511b3d4a8f51fed27dbeed8f"} Jan 31 14:58:55 crc kubenswrapper[4751]: I0131 14:58:55.707880 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" Jan 31 14:58:55 crc 
kubenswrapper[4751]: I0131 14:58:55.710816 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" Jan 31 14:58:57 crc kubenswrapper[4751]: I0131 14:58:57.451395 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"440e5809-7b49-4b21-99dd-668468c84017","Type":"ContainerStarted","Data":"400722d3dac6cd5b0b727b3e599b127bb527981160049f2561a32e7ada14affd"} Jan 31 14:58:57 crc kubenswrapper[4751]: I0131 14:58:57.451792 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"440e5809-7b49-4b21-99dd-668468c84017","Type":"ContainerStarted","Data":"34a87b0cfca857f6a2c07d4713531103b7df75f0fdc3e2be299ecaf554d5d9db"} Jan 31 14:58:57 crc kubenswrapper[4751]: I0131 14:58:57.451804 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"440e5809-7b49-4b21-99dd-668468c84017","Type":"ContainerStarted","Data":"03c86cbbc819872662746f2a8384c7c50f07b481c42b5f3d39e0b1e87c7b0557"} Jan 31 14:58:57 crc kubenswrapper[4751]: I0131 14:58:57.451814 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"440e5809-7b49-4b21-99dd-668468c84017","Type":"ContainerStarted","Data":"3b4375e902d16ea731761694aa85354dcfcda568f68f1d4210b06b07c701f380"} Jan 31 14:58:58 crc kubenswrapper[4751]: I0131 14:58:58.467940 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"440e5809-7b49-4b21-99dd-668468c84017","Type":"ContainerStarted","Data":"1f74cf8c2ce97cd17f509447e4c986197d8af0e8b2f40e7c6a07653c81e66d3b"} Jan 31 14:58:59 crc kubenswrapper[4751]: I0131 14:58:59.480006 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" 
event={"ID":"440e5809-7b49-4b21-99dd-668468c84017","Type":"ContainerStarted","Data":"ae11b6c0a7f7893c0ba728593c9e1b6db0bc399ae9c55df1f1023d422fc9333c"} Jan 31 14:58:59 crc kubenswrapper[4751]: I0131 14:58:59.480309 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"440e5809-7b49-4b21-99dd-668468c84017","Type":"ContainerStarted","Data":"519bd8155f30918b172e24832e84310378bd7ea10e796377a992dd3fe9e7276d"} Jan 31 14:58:59 crc kubenswrapper[4751]: I0131 14:58:59.480319 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"440e5809-7b49-4b21-99dd-668468c84017","Type":"ContainerStarted","Data":"950232b5b660c70b9100e81003ff993443f745f40d7da6ba8dc037822059cb8e"} Jan 31 14:58:59 crc kubenswrapper[4751]: I0131 14:58:59.480329 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"440e5809-7b49-4b21-99dd-668468c84017","Type":"ContainerStarted","Data":"71ca1416bdc095b268ec385a4ebcd269b729c80c3aee7f832db2892f4fe6e78a"} Jan 31 14:58:59 crc kubenswrapper[4751]: I0131 14:58:59.480340 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"440e5809-7b49-4b21-99dd-668468c84017","Type":"ContainerStarted","Data":"1e2003fe4d2366b583ebedf393e2492c910be0ebf3f2652f5a15b1e8c78961df"} Jan 31 14:58:59 crc kubenswrapper[4751]: I0131 14:58:59.480351 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"440e5809-7b49-4b21-99dd-668468c84017","Type":"ContainerStarted","Data":"03b25054db738f38056ec8af2822c9203e252f1a4f95be8c4ab8c1c34de3455c"} Jan 31 14:58:59 crc kubenswrapper[4751]: I0131 14:58:59.521767 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/swift-storage-0" podStartSLOduration=35.875552909 podStartE2EDuration="40.521746512s" podCreationTimestamp="2026-01-31 14:58:19 
+0000 UTC" firstStartedPulling="2026-01-31 14:58:53.521585855 +0000 UTC m=+1037.896298740" lastFinishedPulling="2026-01-31 14:58:58.167779458 +0000 UTC m=+1042.542492343" observedRunningTime="2026-01-31 14:58:59.515816756 +0000 UTC m=+1043.890529651" watchObservedRunningTime="2026-01-31 14:58:59.521746512 +0000 UTC m=+1043.896459397" Jan 31 14:59:06 crc kubenswrapper[4751]: I0131 14:59:06.649616 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openstack-operators/glance-operator-controller-manager-75dc47fc9-v4thz"] Jan 31 14:59:06 crc kubenswrapper[4751]: E0131 14:59:06.650507 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="606aa4a9-2afe-4f51-a562-90f716040b58" containerName="swift-ring-rebalance" Jan 31 14:59:06 crc kubenswrapper[4751]: I0131 14:59:06.650523 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="606aa4a9-2afe-4f51-a562-90f716040b58" containerName="swift-ring-rebalance" Jan 31 14:59:06 crc kubenswrapper[4751]: E0131 14:59:06.650554 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="585f0c4b-3594-4683-bb38-d1fcbbee12cd" containerName="extract" Jan 31 14:59:06 crc kubenswrapper[4751]: I0131 14:59:06.650563 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="585f0c4b-3594-4683-bb38-d1fcbbee12cd" containerName="extract" Jan 31 14:59:06 crc kubenswrapper[4751]: E0131 14:59:06.650578 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="585f0c4b-3594-4683-bb38-d1fcbbee12cd" containerName="pull" Jan 31 14:59:06 crc kubenswrapper[4751]: I0131 14:59:06.650587 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="585f0c4b-3594-4683-bb38-d1fcbbee12cd" containerName="pull" Jan 31 14:59:06 crc kubenswrapper[4751]: E0131 14:59:06.650603 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="585f0c4b-3594-4683-bb38-d1fcbbee12cd" containerName="util" Jan 31 14:59:06 crc kubenswrapper[4751]: I0131 14:59:06.650612 4751 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="585f0c4b-3594-4683-bb38-d1fcbbee12cd" containerName="util" Jan 31 14:59:06 crc kubenswrapper[4751]: I0131 14:59:06.650763 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="585f0c4b-3594-4683-bb38-d1fcbbee12cd" containerName="extract" Jan 31 14:59:06 crc kubenswrapper[4751]: I0131 14:59:06.650787 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="606aa4a9-2afe-4f51-a562-90f716040b58" containerName="swift-ring-rebalance" Jan 31 14:59:06 crc kubenswrapper[4751]: I0131 14:59:06.651360 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-75dc47fc9-v4thz" Jan 31 14:59:06 crc kubenswrapper[4751]: I0131 14:59:06.653685 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-service-cert" Jan 31 14:59:06 crc kubenswrapper[4751]: I0131 14:59:06.655752 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openstack-operators"/"glance-operator-controller-manager-dockercfg-ph7z8" Jan 31 14:59:06 crc kubenswrapper[4751]: I0131 14:59:06.666937 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-75dc47fc9-v4thz"] Jan 31 14:59:06 crc kubenswrapper[4751]: I0131 14:59:06.738583 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f70443db-a342-4f5d-81b2-39c01f494cf8-webhook-cert\") pod \"glance-operator-controller-manager-75dc47fc9-v4thz\" (UID: \"f70443db-a342-4f5d-81b2-39c01f494cf8\") " pod="openstack-operators/glance-operator-controller-manager-75dc47fc9-v4thz" Jan 31 14:59:06 crc kubenswrapper[4751]: I0131 14:59:06.738655 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: 
\"kubernetes.io/secret/f70443db-a342-4f5d-81b2-39c01f494cf8-apiservice-cert\") pod \"glance-operator-controller-manager-75dc47fc9-v4thz\" (UID: \"f70443db-a342-4f5d-81b2-39c01f494cf8\") " pod="openstack-operators/glance-operator-controller-manager-75dc47fc9-v4thz" Jan 31 14:59:06 crc kubenswrapper[4751]: I0131 14:59:06.738754 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l6f7g\" (UniqueName: \"kubernetes.io/projected/f70443db-a342-4f5d-81b2-39c01f494cf8-kube-api-access-l6f7g\") pod \"glance-operator-controller-manager-75dc47fc9-v4thz\" (UID: \"f70443db-a342-4f5d-81b2-39c01f494cf8\") " pod="openstack-operators/glance-operator-controller-manager-75dc47fc9-v4thz" Jan 31 14:59:06 crc kubenswrapper[4751]: I0131 14:59:06.839881 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f70443db-a342-4f5d-81b2-39c01f494cf8-webhook-cert\") pod \"glance-operator-controller-manager-75dc47fc9-v4thz\" (UID: \"f70443db-a342-4f5d-81b2-39c01f494cf8\") " pod="openstack-operators/glance-operator-controller-manager-75dc47fc9-v4thz" Jan 31 14:59:06 crc kubenswrapper[4751]: I0131 14:59:06.839942 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f70443db-a342-4f5d-81b2-39c01f494cf8-apiservice-cert\") pod \"glance-operator-controller-manager-75dc47fc9-v4thz\" (UID: \"f70443db-a342-4f5d-81b2-39c01f494cf8\") " pod="openstack-operators/glance-operator-controller-manager-75dc47fc9-v4thz" Jan 31 14:59:06 crc kubenswrapper[4751]: I0131 14:59:06.840051 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l6f7g\" (UniqueName: \"kubernetes.io/projected/f70443db-a342-4f5d-81b2-39c01f494cf8-kube-api-access-l6f7g\") pod \"glance-operator-controller-manager-75dc47fc9-v4thz\" (UID: \"f70443db-a342-4f5d-81b2-39c01f494cf8\") " 
pod="openstack-operators/glance-operator-controller-manager-75dc47fc9-v4thz" Jan 31 14:59:06 crc kubenswrapper[4751]: I0131 14:59:06.855251 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f70443db-a342-4f5d-81b2-39c01f494cf8-webhook-cert\") pod \"glance-operator-controller-manager-75dc47fc9-v4thz\" (UID: \"f70443db-a342-4f5d-81b2-39c01f494cf8\") " pod="openstack-operators/glance-operator-controller-manager-75dc47fc9-v4thz" Jan 31 14:59:06 crc kubenswrapper[4751]: I0131 14:59:06.855256 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f70443db-a342-4f5d-81b2-39c01f494cf8-apiservice-cert\") pod \"glance-operator-controller-manager-75dc47fc9-v4thz\" (UID: \"f70443db-a342-4f5d-81b2-39c01f494cf8\") " pod="openstack-operators/glance-operator-controller-manager-75dc47fc9-v4thz" Jan 31 14:59:06 crc kubenswrapper[4751]: I0131 14:59:06.861879 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l6f7g\" (UniqueName: \"kubernetes.io/projected/f70443db-a342-4f5d-81b2-39c01f494cf8-kube-api-access-l6f7g\") pod \"glance-operator-controller-manager-75dc47fc9-v4thz\" (UID: \"f70443db-a342-4f5d-81b2-39c01f494cf8\") " pod="openstack-operators/glance-operator-controller-manager-75dc47fc9-v4thz" Jan 31 14:59:06 crc kubenswrapper[4751]: I0131 14:59:06.976696 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-75dc47fc9-v4thz" Jan 31 14:59:07 crc kubenswrapper[4751]: I0131 14:59:07.481286 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openstack-operators/glance-operator-controller-manager-75dc47fc9-v4thz"] Jan 31 14:59:07 crc kubenswrapper[4751]: W0131 14:59:07.488186 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf70443db_a342_4f5d_81b2_39c01f494cf8.slice/crio-1eff2ed52d31a6cb86d6cac75fe9fb2899624e91687b3dbe55c93d71e4cef517 WatchSource:0}: Error finding container 1eff2ed52d31a6cb86d6cac75fe9fb2899624e91687b3dbe55c93d71e4cef517: Status 404 returned error can't find the container with id 1eff2ed52d31a6cb86d6cac75fe9fb2899624e91687b3dbe55c93d71e4cef517 Jan 31 14:59:07 crc kubenswrapper[4751]: I0131 14:59:07.550451 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-75dc47fc9-v4thz" event={"ID":"f70443db-a342-4f5d-81b2-39c01f494cf8","Type":"ContainerStarted","Data":"1eff2ed52d31a6cb86d6cac75fe9fb2899624e91687b3dbe55c93d71e4cef517"} Jan 31 14:59:08 crc kubenswrapper[4751]: I0131 14:59:08.896834 4751 patch_prober.go:28] interesting pod/machine-config-daemon-2wpj7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 14:59:08 crc kubenswrapper[4751]: I0131 14:59:08.897226 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 14:59:08 crc kubenswrapper[4751]: I0131 14:59:08.897286 4751 
kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" Jan 31 14:59:08 crc kubenswrapper[4751]: I0131 14:59:08.898005 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"dc064826cd8a78005216541d25736856cc2dd920bfe44778b79dbfd2f76ed341"} pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 14:59:08 crc kubenswrapper[4751]: I0131 14:59:08.898095 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerName="machine-config-daemon" containerID="cri-o://dc064826cd8a78005216541d25736856cc2dd920bfe44778b79dbfd2f76ed341" gracePeriod=600 Jan 31 14:59:09 crc kubenswrapper[4751]: I0131 14:59:09.569304 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-75dc47fc9-v4thz" event={"ID":"f70443db-a342-4f5d-81b2-39c01f494cf8","Type":"ContainerStarted","Data":"ab946ef56298d90f2da08c7aa03dc9761afb66c0a527a34685eef2375ecebd56"} Jan 31 14:59:09 crc kubenswrapper[4751]: I0131 14:59:09.569836 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openstack-operators/glance-operator-controller-manager-75dc47fc9-v4thz" Jan 31 14:59:09 crc kubenswrapper[4751]: I0131 14:59:09.573128 4751 generic.go:334] "Generic (PLEG): container finished" podID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerID="dc064826cd8a78005216541d25736856cc2dd920bfe44778b79dbfd2f76ed341" exitCode=0 Jan 31 14:59:09 crc kubenswrapper[4751]: I0131 14:59:09.573181 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" 
event={"ID":"b4c170e8-22c9-43a9-8b34-9d626c2ccddc","Type":"ContainerDied","Data":"dc064826cd8a78005216541d25736856cc2dd920bfe44778b79dbfd2f76ed341"} Jan 31 14:59:09 crc kubenswrapper[4751]: I0131 14:59:09.573213 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" event={"ID":"b4c170e8-22c9-43a9-8b34-9d626c2ccddc","Type":"ContainerStarted","Data":"7fcf941f127d31d0e5c99d5ef038c633782d289d0e911f4e9c5c6f77b2a91e2a"} Jan 31 14:59:09 crc kubenswrapper[4751]: I0131 14:59:09.573235 4751 scope.go:117] "RemoveContainer" containerID="f4d4f92719c72ec0adb31e02a10d5c8bcb4b1a9b3bfb5b0e7ed8cfdbc4a53235" Jan 31 14:59:09 crc kubenswrapper[4751]: I0131 14:59:09.590449 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openstack-operators/glance-operator-controller-manager-75dc47fc9-v4thz" podStartSLOduration=2.293730486 podStartE2EDuration="3.590428321s" podCreationTimestamp="2026-01-31 14:59:06 +0000 UTC" firstStartedPulling="2026-01-31 14:59:07.492224529 +0000 UTC m=+1051.866937404" lastFinishedPulling="2026-01-31 14:59:08.788922344 +0000 UTC m=+1053.163635239" observedRunningTime="2026-01-31 14:59:09.590232926 +0000 UTC m=+1053.964945811" watchObservedRunningTime="2026-01-31 14:59:09.590428321 +0000 UTC m=+1053.965141206" Jan 31 14:59:16 crc kubenswrapper[4751]: I0131 14:59:16.982735 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openstack-operators/glance-operator-controller-manager-75dc47fc9-v4thz" Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.501393 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/openstackclient"] Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.502686 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/openstackclient" Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.505793 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"default-dockercfg-gh2c4" Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.506376 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"openstack-config-secret" Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.506641 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openstack-scripts-9db6gc427h" Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.506900 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openstack-config" Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.509912 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstackclient"] Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.552366 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-bf79-account-create-update-whmk8"] Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.553587 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-bf79-account-create-update-whmk8" Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.557357 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-db-secret" Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.561860 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-create-b8nfw"] Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.563088 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-create-b8nfw" Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.575683 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-b8nfw"] Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.586463 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-bf79-account-create-update-whmk8"] Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.650734 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d360673b-7556-44b9-b7bd-4805810da349-openstack-config\") pod \"openstackclient\" (UID: \"d360673b-7556-44b9-b7bd-4805810da349\") " pod="glance-kuttl-tests/openstackclient" Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.650784 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d360673b-7556-44b9-b7bd-4805810da349-openstack-config-secret\") pod \"openstackclient\" (UID: \"d360673b-7556-44b9-b7bd-4805810da349\") " pod="glance-kuttl-tests/openstackclient" Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.650816 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/892f8632-f7d8-46b0-a39a-4a84f5e3a2aa-operator-scripts\") pod \"glance-db-create-b8nfw\" (UID: \"892f8632-f7d8-46b0-a39a-4a84f5e3a2aa\") " pod="glance-kuttl-tests/glance-db-create-b8nfw" Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.650836 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bs77g\" (UniqueName: \"kubernetes.io/projected/892f8632-f7d8-46b0-a39a-4a84f5e3a2aa-kube-api-access-bs77g\") pod \"glance-db-create-b8nfw\" (UID: 
\"892f8632-f7d8-46b0-a39a-4a84f5e3a2aa\") " pod="glance-kuttl-tests/glance-db-create-b8nfw" Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.650865 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqbgs\" (UniqueName: \"kubernetes.io/projected/4c9ad1c0-9bb7-4d3e-8e68-8310292d89fa-kube-api-access-cqbgs\") pod \"glance-bf79-account-create-update-whmk8\" (UID: \"4c9ad1c0-9bb7-4d3e-8e68-8310292d89fa\") " pod="glance-kuttl-tests/glance-bf79-account-create-update-whmk8" Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.651034 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8ch7w\" (UniqueName: \"kubernetes.io/projected/d360673b-7556-44b9-b7bd-4805810da349-kube-api-access-8ch7w\") pod \"openstackclient\" (UID: \"d360673b-7556-44b9-b7bd-4805810da349\") " pod="glance-kuttl-tests/openstackclient" Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.651114 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/d360673b-7556-44b9-b7bd-4805810da349-openstack-scripts\") pod \"openstackclient\" (UID: \"d360673b-7556-44b9-b7bd-4805810da349\") " pod="glance-kuttl-tests/openstackclient" Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.651170 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c9ad1c0-9bb7-4d3e-8e68-8310292d89fa-operator-scripts\") pod \"glance-bf79-account-create-update-whmk8\" (UID: \"4c9ad1c0-9bb7-4d3e-8e68-8310292d89fa\") " pod="glance-kuttl-tests/glance-bf79-account-create-update-whmk8" Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.752444 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-scripts\" (UniqueName: 
\"kubernetes.io/configmap/d360673b-7556-44b9-b7bd-4805810da349-openstack-scripts\") pod \"openstackclient\" (UID: \"d360673b-7556-44b9-b7bd-4805810da349\") " pod="glance-kuttl-tests/openstackclient" Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.752534 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c9ad1c0-9bb7-4d3e-8e68-8310292d89fa-operator-scripts\") pod \"glance-bf79-account-create-update-whmk8\" (UID: \"4c9ad1c0-9bb7-4d3e-8e68-8310292d89fa\") " pod="glance-kuttl-tests/glance-bf79-account-create-update-whmk8" Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.752646 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d360673b-7556-44b9-b7bd-4805810da349-openstack-config\") pod \"openstackclient\" (UID: \"d360673b-7556-44b9-b7bd-4805810da349\") " pod="glance-kuttl-tests/openstackclient" Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.752690 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d360673b-7556-44b9-b7bd-4805810da349-openstack-config-secret\") pod \"openstackclient\" (UID: \"d360673b-7556-44b9-b7bd-4805810da349\") " pod="glance-kuttl-tests/openstackclient" Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.752735 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/892f8632-f7d8-46b0-a39a-4a84f5e3a2aa-operator-scripts\") pod \"glance-db-create-b8nfw\" (UID: \"892f8632-f7d8-46b0-a39a-4a84f5e3a2aa\") " pod="glance-kuttl-tests/glance-db-create-b8nfw" Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.752762 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bs77g\" (UniqueName: 
\"kubernetes.io/projected/892f8632-f7d8-46b0-a39a-4a84f5e3a2aa-kube-api-access-bs77g\") pod \"glance-db-create-b8nfw\" (UID: \"892f8632-f7d8-46b0-a39a-4a84f5e3a2aa\") " pod="glance-kuttl-tests/glance-db-create-b8nfw" Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.752804 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqbgs\" (UniqueName: \"kubernetes.io/projected/4c9ad1c0-9bb7-4d3e-8e68-8310292d89fa-kube-api-access-cqbgs\") pod \"glance-bf79-account-create-update-whmk8\" (UID: \"4c9ad1c0-9bb7-4d3e-8e68-8310292d89fa\") " pod="glance-kuttl-tests/glance-bf79-account-create-update-whmk8" Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.752863 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8ch7w\" (UniqueName: \"kubernetes.io/projected/d360673b-7556-44b9-b7bd-4805810da349-kube-api-access-8ch7w\") pod \"openstackclient\" (UID: \"d360673b-7556-44b9-b7bd-4805810da349\") " pod="glance-kuttl-tests/openstackclient" Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.753535 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d360673b-7556-44b9-b7bd-4805810da349-openstack-config\") pod \"openstackclient\" (UID: \"d360673b-7556-44b9-b7bd-4805810da349\") " pod="glance-kuttl-tests/openstackclient" Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.753698 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/892f8632-f7d8-46b0-a39a-4a84f5e3a2aa-operator-scripts\") pod \"glance-db-create-b8nfw\" (UID: \"892f8632-f7d8-46b0-a39a-4a84f5e3a2aa\") " pod="glance-kuttl-tests/glance-db-create-b8nfw" Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.753749 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: 
\"kubernetes.io/configmap/4c9ad1c0-9bb7-4d3e-8e68-8310292d89fa-operator-scripts\") pod \"glance-bf79-account-create-update-whmk8\" (UID: \"4c9ad1c0-9bb7-4d3e-8e68-8310292d89fa\") " pod="glance-kuttl-tests/glance-bf79-account-create-update-whmk8" Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.754027 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/d360673b-7556-44b9-b7bd-4805810da349-openstack-scripts\") pod \"openstackclient\" (UID: \"d360673b-7556-44b9-b7bd-4805810da349\") " pod="glance-kuttl-tests/openstackclient" Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.759988 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d360673b-7556-44b9-b7bd-4805810da349-openstack-config-secret\") pod \"openstackclient\" (UID: \"d360673b-7556-44b9-b7bd-4805810da349\") " pod="glance-kuttl-tests/openstackclient" Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.769908 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8ch7w\" (UniqueName: \"kubernetes.io/projected/d360673b-7556-44b9-b7bd-4805810da349-kube-api-access-8ch7w\") pod \"openstackclient\" (UID: \"d360673b-7556-44b9-b7bd-4805810da349\") " pod="glance-kuttl-tests/openstackclient" Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.771439 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqbgs\" (UniqueName: \"kubernetes.io/projected/4c9ad1c0-9bb7-4d3e-8e68-8310292d89fa-kube-api-access-cqbgs\") pod \"glance-bf79-account-create-update-whmk8\" (UID: \"4c9ad1c0-9bb7-4d3e-8e68-8310292d89fa\") " pod="glance-kuttl-tests/glance-bf79-account-create-update-whmk8" Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.771855 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bs77g\" (UniqueName: 
\"kubernetes.io/projected/892f8632-f7d8-46b0-a39a-4a84f5e3a2aa-kube-api-access-bs77g\") pod \"glance-db-create-b8nfw\" (UID: \"892f8632-f7d8-46b0-a39a-4a84f5e3a2aa\") " pod="glance-kuttl-tests/glance-db-create-b8nfw" Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.828370 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstackclient" Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.874960 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-bf79-account-create-update-whmk8" Jan 31 14:59:20 crc kubenswrapper[4751]: I0131 14:59:20.887965 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-b8nfw" Jan 31 14:59:21 crc kubenswrapper[4751]: I0131 14:59:21.232254 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstackclient"] Jan 31 14:59:21 crc kubenswrapper[4751]: I0131 14:59:21.357241 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-bf79-account-create-update-whmk8"] Jan 31 14:59:21 crc kubenswrapper[4751]: W0131 14:59:21.365441 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4c9ad1c0_9bb7_4d3e_8e68_8310292d89fa.slice/crio-546a3a33848e346d24fbc975e927aab08414e617a2ceb91ba8f794b0d2405aee WatchSource:0}: Error finding container 546a3a33848e346d24fbc975e927aab08414e617a2ceb91ba8f794b0d2405aee: Status 404 returned error can't find the container with id 546a3a33848e346d24fbc975e927aab08414e617a2ceb91ba8f794b0d2405aee Jan 31 14:59:21 crc kubenswrapper[4751]: I0131 14:59:21.386379 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-b8nfw"] Jan 31 14:59:21 crc kubenswrapper[4751]: W0131 14:59:21.391035 4751 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod892f8632_f7d8_46b0_a39a_4a84f5e3a2aa.slice/crio-67b5a0239378d9b43272b8e5a4f0eb11ba98f694abae6196e55befbe976cb98c WatchSource:0}: Error finding container 67b5a0239378d9b43272b8e5a4f0eb11ba98f694abae6196e55befbe976cb98c: Status 404 returned error can't find the container with id 67b5a0239378d9b43272b8e5a4f0eb11ba98f694abae6196e55befbe976cb98c Jan 31 14:59:21 crc kubenswrapper[4751]: I0131 14:59:21.677259 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-bf79-account-create-update-whmk8" event={"ID":"4c9ad1c0-9bb7-4d3e-8e68-8310292d89fa","Type":"ContainerStarted","Data":"4157453b55a598117cf21c7e58fec8625fe386b3472f188272985dac7429ad14"} Jan 31 14:59:21 crc kubenswrapper[4751]: I0131 14:59:21.677600 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-bf79-account-create-update-whmk8" event={"ID":"4c9ad1c0-9bb7-4d3e-8e68-8310292d89fa","Type":"ContainerStarted","Data":"546a3a33848e346d24fbc975e927aab08414e617a2ceb91ba8f794b0d2405aee"} Jan 31 14:59:21 crc kubenswrapper[4751]: I0131 14:59:21.680285 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-b8nfw" event={"ID":"892f8632-f7d8-46b0-a39a-4a84f5e3a2aa","Type":"ContainerStarted","Data":"53e4421364bd50f8121a14bb4c3e20cbc7c5ba08c19bdb68ee47f37ac2b94308"} Jan 31 14:59:21 crc kubenswrapper[4751]: I0131 14:59:21.680319 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-b8nfw" event={"ID":"892f8632-f7d8-46b0-a39a-4a84f5e3a2aa","Type":"ContainerStarted","Data":"67b5a0239378d9b43272b8e5a4f0eb11ba98f694abae6196e55befbe976cb98c"} Jan 31 14:59:21 crc kubenswrapper[4751]: I0131 14:59:21.684342 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstackclient" 
event={"ID":"d360673b-7556-44b9-b7bd-4805810da349","Type":"ContainerStarted","Data":"c2e3697d65b3597868569dcd055006b2a37c4a1b745665e4039d129867477c4a"} Jan 31 14:59:21 crc kubenswrapper[4751]: I0131 14:59:21.713366 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-bf79-account-create-update-whmk8" podStartSLOduration=1.713347022 podStartE2EDuration="1.713347022s" podCreationTimestamp="2026-01-31 14:59:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:59:21.695301206 +0000 UTC m=+1066.070014091" watchObservedRunningTime="2026-01-31 14:59:21.713347022 +0000 UTC m=+1066.088059907" Jan 31 14:59:21 crc kubenswrapper[4751]: I0131 14:59:21.715185 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-db-create-b8nfw" podStartSLOduration=1.71517907 podStartE2EDuration="1.71517907s" podCreationTimestamp="2026-01-31 14:59:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 14:59:21.711086082 +0000 UTC m=+1066.085798967" watchObservedRunningTime="2026-01-31 14:59:21.71517907 +0000 UTC m=+1066.089891955" Jan 31 14:59:22 crc kubenswrapper[4751]: I0131 14:59:22.695432 4751 generic.go:334] "Generic (PLEG): container finished" podID="892f8632-f7d8-46b0-a39a-4a84f5e3a2aa" containerID="53e4421364bd50f8121a14bb4c3e20cbc7c5ba08c19bdb68ee47f37ac2b94308" exitCode=0 Jan 31 14:59:22 crc kubenswrapper[4751]: I0131 14:59:22.695702 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-b8nfw" event={"ID":"892f8632-f7d8-46b0-a39a-4a84f5e3a2aa","Type":"ContainerDied","Data":"53e4421364bd50f8121a14bb4c3e20cbc7c5ba08c19bdb68ee47f37ac2b94308"} Jan 31 14:59:22 crc kubenswrapper[4751]: I0131 14:59:22.699412 4751 generic.go:334] "Generic (PLEG): container 
finished" podID="4c9ad1c0-9bb7-4d3e-8e68-8310292d89fa" containerID="4157453b55a598117cf21c7e58fec8625fe386b3472f188272985dac7429ad14" exitCode=0 Jan 31 14:59:22 crc kubenswrapper[4751]: I0131 14:59:22.699441 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-bf79-account-create-update-whmk8" event={"ID":"4c9ad1c0-9bb7-4d3e-8e68-8310292d89fa","Type":"ContainerDied","Data":"4157453b55a598117cf21c7e58fec8625fe386b3472f188272985dac7429ad14"} Jan 31 14:59:24 crc kubenswrapper[4751]: I0131 14:59:24.150737 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-bf79-account-create-update-whmk8" Jan 31 14:59:24 crc kubenswrapper[4751]: I0131 14:59:24.158694 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-b8nfw" Jan 31 14:59:24 crc kubenswrapper[4751]: I0131 14:59:24.212522 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c9ad1c0-9bb7-4d3e-8e68-8310292d89fa-operator-scripts\") pod \"4c9ad1c0-9bb7-4d3e-8e68-8310292d89fa\" (UID: \"4c9ad1c0-9bb7-4d3e-8e68-8310292d89fa\") " Jan 31 14:59:24 crc kubenswrapper[4751]: I0131 14:59:24.212706 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqbgs\" (UniqueName: \"kubernetes.io/projected/4c9ad1c0-9bb7-4d3e-8e68-8310292d89fa-kube-api-access-cqbgs\") pod \"4c9ad1c0-9bb7-4d3e-8e68-8310292d89fa\" (UID: \"4c9ad1c0-9bb7-4d3e-8e68-8310292d89fa\") " Jan 31 14:59:24 crc kubenswrapper[4751]: I0131 14:59:24.213416 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4c9ad1c0-9bb7-4d3e-8e68-8310292d89fa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4c9ad1c0-9bb7-4d3e-8e68-8310292d89fa" (UID: "4c9ad1c0-9bb7-4d3e-8e68-8310292d89fa"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:59:24 crc kubenswrapper[4751]: I0131 14:59:24.213796 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4c9ad1c0-9bb7-4d3e-8e68-8310292d89fa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 14:59:24 crc kubenswrapper[4751]: I0131 14:59:24.217956 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4c9ad1c0-9bb7-4d3e-8e68-8310292d89fa-kube-api-access-cqbgs" (OuterVolumeSpecName: "kube-api-access-cqbgs") pod "4c9ad1c0-9bb7-4d3e-8e68-8310292d89fa" (UID: "4c9ad1c0-9bb7-4d3e-8e68-8310292d89fa"). InnerVolumeSpecName "kube-api-access-cqbgs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:59:24 crc kubenswrapper[4751]: I0131 14:59:24.315015 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/892f8632-f7d8-46b0-a39a-4a84f5e3a2aa-operator-scripts\") pod \"892f8632-f7d8-46b0-a39a-4a84f5e3a2aa\" (UID: \"892f8632-f7d8-46b0-a39a-4a84f5e3a2aa\") " Jan 31 14:59:24 crc kubenswrapper[4751]: I0131 14:59:24.315328 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bs77g\" (UniqueName: \"kubernetes.io/projected/892f8632-f7d8-46b0-a39a-4a84f5e3a2aa-kube-api-access-bs77g\") pod \"892f8632-f7d8-46b0-a39a-4a84f5e3a2aa\" (UID: \"892f8632-f7d8-46b0-a39a-4a84f5e3a2aa\") " Jan 31 14:59:24 crc kubenswrapper[4751]: I0131 14:59:24.315546 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/892f8632-f7d8-46b0-a39a-4a84f5e3a2aa-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "892f8632-f7d8-46b0-a39a-4a84f5e3a2aa" (UID: "892f8632-f7d8-46b0-a39a-4a84f5e3a2aa"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 14:59:24 crc kubenswrapper[4751]: I0131 14:59:24.315603 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cqbgs\" (UniqueName: \"kubernetes.io/projected/4c9ad1c0-9bb7-4d3e-8e68-8310292d89fa-kube-api-access-cqbgs\") on node \"crc\" DevicePath \"\"" Jan 31 14:59:24 crc kubenswrapper[4751]: I0131 14:59:24.317834 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/892f8632-f7d8-46b0-a39a-4a84f5e3a2aa-kube-api-access-bs77g" (OuterVolumeSpecName: "kube-api-access-bs77g") pod "892f8632-f7d8-46b0-a39a-4a84f5e3a2aa" (UID: "892f8632-f7d8-46b0-a39a-4a84f5e3a2aa"). InnerVolumeSpecName "kube-api-access-bs77g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:59:24 crc kubenswrapper[4751]: I0131 14:59:24.416675 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bs77g\" (UniqueName: \"kubernetes.io/projected/892f8632-f7d8-46b0-a39a-4a84f5e3a2aa-kube-api-access-bs77g\") on node \"crc\" DevicePath \"\"" Jan 31 14:59:24 crc kubenswrapper[4751]: I0131 14:59:24.416709 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/892f8632-f7d8-46b0-a39a-4a84f5e3a2aa-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 14:59:24 crc kubenswrapper[4751]: I0131 14:59:24.726784 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-b8nfw" event={"ID":"892f8632-f7d8-46b0-a39a-4a84f5e3a2aa","Type":"ContainerDied","Data":"67b5a0239378d9b43272b8e5a4f0eb11ba98f694abae6196e55befbe976cb98c"} Jan 31 14:59:24 crc kubenswrapper[4751]: I0131 14:59:24.726840 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="67b5a0239378d9b43272b8e5a4f0eb11ba98f694abae6196e55befbe976cb98c" Jan 31 14:59:24 crc kubenswrapper[4751]: I0131 14:59:24.727983 4751 util.go:48] "No ready sandbox for pod can 
be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-b8nfw" Jan 31 14:59:24 crc kubenswrapper[4751]: I0131 14:59:24.729173 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-bf79-account-create-update-whmk8" event={"ID":"4c9ad1c0-9bb7-4d3e-8e68-8310292d89fa","Type":"ContainerDied","Data":"546a3a33848e346d24fbc975e927aab08414e617a2ceb91ba8f794b0d2405aee"} Jan 31 14:59:24 crc kubenswrapper[4751]: I0131 14:59:24.729218 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="546a3a33848e346d24fbc975e927aab08414e617a2ceb91ba8f794b0d2405aee" Jan 31 14:59:24 crc kubenswrapper[4751]: I0131 14:59:24.729278 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-bf79-account-create-update-whmk8" Jan 31 14:59:25 crc kubenswrapper[4751]: I0131 14:59:25.746090 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-sync-lvpgz"] Jan 31 14:59:25 crc kubenswrapper[4751]: E0131 14:59:25.746613 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="892f8632-f7d8-46b0-a39a-4a84f5e3a2aa" containerName="mariadb-database-create" Jan 31 14:59:25 crc kubenswrapper[4751]: I0131 14:59:25.746627 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="892f8632-f7d8-46b0-a39a-4a84f5e3a2aa" containerName="mariadb-database-create" Jan 31 14:59:25 crc kubenswrapper[4751]: E0131 14:59:25.746644 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4c9ad1c0-9bb7-4d3e-8e68-8310292d89fa" containerName="mariadb-account-create-update" Jan 31 14:59:25 crc kubenswrapper[4751]: I0131 14:59:25.746650 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4c9ad1c0-9bb7-4d3e-8e68-8310292d89fa" containerName="mariadb-account-create-update" Jan 31 14:59:25 crc kubenswrapper[4751]: I0131 14:59:25.746778 4751 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="4c9ad1c0-9bb7-4d3e-8e68-8310292d89fa" containerName="mariadb-account-create-update" Jan 31 14:59:25 crc kubenswrapper[4751]: I0131 14:59:25.746794 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="892f8632-f7d8-46b0-a39a-4a84f5e3a2aa" containerName="mariadb-database-create" Jan 31 14:59:25 crc kubenswrapper[4751]: I0131 14:59:25.747581 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-lvpgz" Jan 31 14:59:25 crc kubenswrapper[4751]: W0131 14:59:25.751358 4751 reflector.go:561] object-"glance-kuttl-tests"/"glance-glance-dockercfg-fgjx2": failed to list *v1.Secret: secrets "glance-glance-dockercfg-fgjx2" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "glance-kuttl-tests": no relationship found between node 'crc' and this object Jan 31 14:59:25 crc kubenswrapper[4751]: W0131 14:59:25.751373 4751 reflector.go:561] object-"glance-kuttl-tests"/"glance-config-data": failed to list *v1.Secret: secrets "glance-config-data" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "glance-kuttl-tests": no relationship found between node 'crc' and this object Jan 31 14:59:25 crc kubenswrapper[4751]: E0131 14:59:25.751407 4751 reflector.go:158] "Unhandled Error" err="object-\"glance-kuttl-tests\"/\"glance-glance-dockercfg-fgjx2\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"glance-glance-dockercfg-fgjx2\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"glance-kuttl-tests\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 14:59:25 crc kubenswrapper[4751]: E0131 14:59:25.751426 4751 reflector.go:158] "Unhandled Error" err="object-\"glance-kuttl-tests\"/\"glance-config-data\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"glance-config-data\" is 
forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"glance-kuttl-tests\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 31 14:59:25 crc kubenswrapper[4751]: I0131 14:59:25.761400 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-lvpgz"] Jan 31 14:59:25 crc kubenswrapper[4751]: I0131 14:59:25.849514 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bvkld\" (UniqueName: \"kubernetes.io/projected/5c7b87c6-2803-4ae5-9257-1a7e12d26f61-kube-api-access-bvkld\") pod \"glance-db-sync-lvpgz\" (UID: \"5c7b87c6-2803-4ae5-9257-1a7e12d26f61\") " pod="glance-kuttl-tests/glance-db-sync-lvpgz" Jan 31 14:59:25 crc kubenswrapper[4751]: I0131 14:59:25.849620 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c7b87c6-2803-4ae5-9257-1a7e12d26f61-config-data\") pod \"glance-db-sync-lvpgz\" (UID: \"5c7b87c6-2803-4ae5-9257-1a7e12d26f61\") " pod="glance-kuttl-tests/glance-db-sync-lvpgz" Jan 31 14:59:25 crc kubenswrapper[4751]: I0131 14:59:25.849719 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5c7b87c6-2803-4ae5-9257-1a7e12d26f61-db-sync-config-data\") pod \"glance-db-sync-lvpgz\" (UID: \"5c7b87c6-2803-4ae5-9257-1a7e12d26f61\") " pod="glance-kuttl-tests/glance-db-sync-lvpgz" Jan 31 14:59:25 crc kubenswrapper[4751]: I0131 14:59:25.951590 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5c7b87c6-2803-4ae5-9257-1a7e12d26f61-db-sync-config-data\") pod \"glance-db-sync-lvpgz\" (UID: \"5c7b87c6-2803-4ae5-9257-1a7e12d26f61\") " pod="glance-kuttl-tests/glance-db-sync-lvpgz" Jan 31 14:59:25 crc 
kubenswrapper[4751]: I0131 14:59:25.951730 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bvkld\" (UniqueName: \"kubernetes.io/projected/5c7b87c6-2803-4ae5-9257-1a7e12d26f61-kube-api-access-bvkld\") pod \"glance-db-sync-lvpgz\" (UID: \"5c7b87c6-2803-4ae5-9257-1a7e12d26f61\") " pod="glance-kuttl-tests/glance-db-sync-lvpgz" Jan 31 14:59:25 crc kubenswrapper[4751]: I0131 14:59:25.952158 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c7b87c6-2803-4ae5-9257-1a7e12d26f61-config-data\") pod \"glance-db-sync-lvpgz\" (UID: \"5c7b87c6-2803-4ae5-9257-1a7e12d26f61\") " pod="glance-kuttl-tests/glance-db-sync-lvpgz" Jan 31 14:59:25 crc kubenswrapper[4751]: I0131 14:59:25.971967 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bvkld\" (UniqueName: \"kubernetes.io/projected/5c7b87c6-2803-4ae5-9257-1a7e12d26f61-kube-api-access-bvkld\") pod \"glance-db-sync-lvpgz\" (UID: \"5c7b87c6-2803-4ae5-9257-1a7e12d26f61\") " pod="glance-kuttl-tests/glance-db-sync-lvpgz" Jan 31 14:59:26 crc kubenswrapper[4751]: I0131 14:59:26.782869 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-config-data" Jan 31 14:59:26 crc kubenswrapper[4751]: I0131 14:59:26.787558 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5c7b87c6-2803-4ae5-9257-1a7e12d26f61-db-sync-config-data\") pod \"glance-db-sync-lvpgz\" (UID: \"5c7b87c6-2803-4ae5-9257-1a7e12d26f61\") " pod="glance-kuttl-tests/glance-db-sync-lvpgz" Jan 31 14:59:26 crc kubenswrapper[4751]: I0131 14:59:26.797881 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c7b87c6-2803-4ae5-9257-1a7e12d26f61-config-data\") pod \"glance-db-sync-lvpgz\" (UID: 
\"5c7b87c6-2803-4ae5-9257-1a7e12d26f61\") " pod="glance-kuttl-tests/glance-db-sync-lvpgz" Jan 31 14:59:27 crc kubenswrapper[4751]: I0131 14:59:27.067329 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-fgjx2" Jan 31 14:59:27 crc kubenswrapper[4751]: I0131 14:59:27.077143 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-lvpgz" Jan 31 14:59:30 crc kubenswrapper[4751]: I0131 14:59:30.630467 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-lvpgz"] Jan 31 14:59:30 crc kubenswrapper[4751]: I0131 14:59:30.775239 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstackclient" event={"ID":"d360673b-7556-44b9-b7bd-4805810da349","Type":"ContainerStarted","Data":"44aed1a495029b4f70f43e4d99769c88600905aa98e64ff530f4a6570f61d3ae"} Jan 31 14:59:30 crc kubenswrapper[4751]: I0131 14:59:30.777736 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-lvpgz" event={"ID":"5c7b87c6-2803-4ae5-9257-1a7e12d26f61","Type":"ContainerStarted","Data":"3231b5b68a66beba965e16caaa4e761a73c20400a40b39c6426b49ba73bc4dac"} Jan 31 14:59:30 crc kubenswrapper[4751]: I0131 14:59:30.792199 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/openstackclient" podStartSLOduration=1.6500771429999999 podStartE2EDuration="10.792181933s" podCreationTimestamp="2026-01-31 14:59:20 +0000 UTC" firstStartedPulling="2026-01-31 14:59:21.245181359 +0000 UTC m=+1065.619894244" lastFinishedPulling="2026-01-31 14:59:30.387286149 +0000 UTC m=+1074.761999034" observedRunningTime="2026-01-31 14:59:30.790977641 +0000 UTC m=+1075.165690526" watchObservedRunningTime="2026-01-31 14:59:30.792181933 +0000 UTC m=+1075.166894828" Jan 31 14:59:42 crc kubenswrapper[4751]: I0131 14:59:42.868562 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="glance-kuttl-tests/glance-db-sync-lvpgz" event={"ID":"5c7b87c6-2803-4ae5-9257-1a7e12d26f61","Type":"ContainerStarted","Data":"4d83615719bc342a748610c14a746dcc08356aa04afe079d5f96b964e25ed0f6"} Jan 31 14:59:42 crc kubenswrapper[4751]: I0131 14:59:42.892387 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-db-sync-lvpgz" podStartSLOduration=6.627121364 podStartE2EDuration="17.892369155s" podCreationTimestamp="2026-01-31 14:59:25 +0000 UTC" firstStartedPulling="2026-01-31 14:59:30.641059645 +0000 UTC m=+1075.015772530" lastFinishedPulling="2026-01-31 14:59:41.906307406 +0000 UTC m=+1086.281020321" observedRunningTime="2026-01-31 14:59:42.88383621 +0000 UTC m=+1087.258549095" watchObservedRunningTime="2026-01-31 14:59:42.892369155 +0000 UTC m=+1087.267082040" Jan 31 14:59:49 crc kubenswrapper[4751]: I0131 14:59:49.931239 4751 generic.go:334] "Generic (PLEG): container finished" podID="5c7b87c6-2803-4ae5-9257-1a7e12d26f61" containerID="4d83615719bc342a748610c14a746dcc08356aa04afe079d5f96b964e25ed0f6" exitCode=0 Jan 31 14:59:49 crc kubenswrapper[4751]: I0131 14:59:49.931355 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-lvpgz" event={"ID":"5c7b87c6-2803-4ae5-9257-1a7e12d26f61","Type":"ContainerDied","Data":"4d83615719bc342a748610c14a746dcc08356aa04afe079d5f96b964e25ed0f6"} Jan 31 14:59:51 crc kubenswrapper[4751]: I0131 14:59:51.235427 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-lvpgz" Jan 31 14:59:51 crc kubenswrapper[4751]: I0131 14:59:51.357229 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c7b87c6-2803-4ae5-9257-1a7e12d26f61-config-data\") pod \"5c7b87c6-2803-4ae5-9257-1a7e12d26f61\" (UID: \"5c7b87c6-2803-4ae5-9257-1a7e12d26f61\") " Jan 31 14:59:51 crc kubenswrapper[4751]: I0131 14:59:51.357334 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5c7b87c6-2803-4ae5-9257-1a7e12d26f61-db-sync-config-data\") pod \"5c7b87c6-2803-4ae5-9257-1a7e12d26f61\" (UID: \"5c7b87c6-2803-4ae5-9257-1a7e12d26f61\") " Jan 31 14:59:51 crc kubenswrapper[4751]: I0131 14:59:51.357370 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bvkld\" (UniqueName: \"kubernetes.io/projected/5c7b87c6-2803-4ae5-9257-1a7e12d26f61-kube-api-access-bvkld\") pod \"5c7b87c6-2803-4ae5-9257-1a7e12d26f61\" (UID: \"5c7b87c6-2803-4ae5-9257-1a7e12d26f61\") " Jan 31 14:59:51 crc kubenswrapper[4751]: I0131 14:59:51.363562 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c7b87c6-2803-4ae5-9257-1a7e12d26f61-kube-api-access-bvkld" (OuterVolumeSpecName: "kube-api-access-bvkld") pod "5c7b87c6-2803-4ae5-9257-1a7e12d26f61" (UID: "5c7b87c6-2803-4ae5-9257-1a7e12d26f61"). InnerVolumeSpecName "kube-api-access-bvkld". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 14:59:51 crc kubenswrapper[4751]: I0131 14:59:51.364051 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c7b87c6-2803-4ae5-9257-1a7e12d26f61-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "5c7b87c6-2803-4ae5-9257-1a7e12d26f61" (UID: "5c7b87c6-2803-4ae5-9257-1a7e12d26f61"). 
InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:59:51 crc kubenswrapper[4751]: I0131 14:59:51.404538 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c7b87c6-2803-4ae5-9257-1a7e12d26f61-config-data" (OuterVolumeSpecName: "config-data") pod "5c7b87c6-2803-4ae5-9257-1a7e12d26f61" (UID: "5c7b87c6-2803-4ae5-9257-1a7e12d26f61"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 14:59:51 crc kubenswrapper[4751]: I0131 14:59:51.459881 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c7b87c6-2803-4ae5-9257-1a7e12d26f61-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 14:59:51 crc kubenswrapper[4751]: I0131 14:59:51.459943 4751 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/5c7b87c6-2803-4ae5-9257-1a7e12d26f61-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 14:59:51 crc kubenswrapper[4751]: I0131 14:59:51.459967 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bvkld\" (UniqueName: \"kubernetes.io/projected/5c7b87c6-2803-4ae5-9257-1a7e12d26f61-kube-api-access-bvkld\") on node \"crc\" DevicePath \"\"" Jan 31 14:59:51 crc kubenswrapper[4751]: I0131 14:59:51.947347 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-lvpgz" event={"ID":"5c7b87c6-2803-4ae5-9257-1a7e12d26f61","Type":"ContainerDied","Data":"3231b5b68a66beba965e16caaa4e761a73c20400a40b39c6426b49ba73bc4dac"} Jan 31 14:59:51 crc kubenswrapper[4751]: I0131 14:59:51.947664 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3231b5b68a66beba965e16caaa4e761a73c20400a40b39c6426b49ba73bc4dac" Jan 31 14:59:51 crc kubenswrapper[4751]: I0131 14:59:51.947434 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-lvpgz" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.155958 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 14:59:53 crc kubenswrapper[4751]: E0131 14:59:53.156296 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c7b87c6-2803-4ae5-9257-1a7e12d26f61" containerName="glance-db-sync" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.156313 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c7b87c6-2803-4ae5-9257-1a7e12d26f61" containerName="glance-db-sync" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.156473 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c7b87c6-2803-4ae5-9257-1a7e12d26f61" containerName="glance-db-sync" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.157182 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.158853 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-fgjx2" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.159637 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-scripts" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.161529 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-single-config-data" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.184154 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.286251 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-lib-modules\") 
pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.286503 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.286530 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-dev\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.286586 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53e80c85-256f-4e3a-8338-091b69c8a111-scripts\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.286607 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.286620 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-sys\") pod \"glance-default-single-1\" (UID: 
\"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.286634 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53e80c85-256f-4e3a-8338-091b69c8a111-logs\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.286654 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p2cjr\" (UniqueName: \"kubernetes.io/projected/53e80c85-256f-4e3a-8338-091b69c8a111-kube-api-access-p2cjr\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.286676 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.286714 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-run\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.286736 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53e80c85-256f-4e3a-8338-091b69c8a111-httpd-run\") pod \"glance-default-single-1\" (UID: 
\"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.286753 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-etc-nvme\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.286770 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.286788 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53e80c85-256f-4e3a-8338-091b69c8a111-config-data\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.305864 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.306966 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.324501 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.388429 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac0f9efc-607e-4d26-8677-3cfdbcae5644-config-data\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.388488 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.388551 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac0f9efc-607e-4d26-8677-3cfdbcae5644-logs\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.388576 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-lib-modules\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.388618 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-sys\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.388638 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-run\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.388662 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-run\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.388692 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-run\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.388704 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.388751 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53e80c85-256f-4e3a-8338-091b69c8a111-httpd-run\") pod \"glance-default-single-1\" (UID: 
\"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.388775 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-etc-nvme\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.388801 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.388823 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53e80c85-256f-4e3a-8338-091b69c8a111-config-data\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.388842 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") device mount path \"/mnt/openstack/pv02\"" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.388934 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-etc-nvme\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " 
pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.388847 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t65v9\" (UniqueName: \"kubernetes.io/projected/ac0f9efc-607e-4d26-8677-3cfdbcae5644-kube-api-access-t65v9\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.389208 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53e80c85-256f-4e3a-8338-091b69c8a111-httpd-run\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.389233 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.389300 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-lib-modules\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.389354 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 
crc kubenswrapper[4751]: I0131 14:59:53.389396 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-dev\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.389426 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-dev\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.389424 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.389395 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-lib-modules\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.389509 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-dev\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.389529 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.389552 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac0f9efc-607e-4d26-8677-3cfdbcae5644-scripts\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.389579 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ac0f9efc-607e-4d26-8677-3cfdbcae5644-httpd-run\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.389625 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-etc-nvme\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.389650 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53e80c85-256f-4e3a-8338-091b69c8a111-scripts\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.389672 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.389694 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.389715 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.389734 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-sys\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.389756 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53e80c85-256f-4e3a-8338-091b69c8a111-logs\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.389789 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p2cjr\" (UniqueName: \"kubernetes.io/projected/53e80c85-256f-4e3a-8338-091b69c8a111-kube-api-access-p2cjr\") pod \"glance-default-single-1\" (UID: 
\"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.389856 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") device mount path \"/mnt/openstack/pv05\"" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.390065 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-sys\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.390402 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53e80c85-256f-4e3a-8338-091b69c8a111-logs\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.404305 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53e80c85-256f-4e3a-8338-091b69c8a111-scripts\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.405988 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53e80c85-256f-4e3a-8338-091b69c8a111-config-data\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 
14:59:53.410543 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.412541 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p2cjr\" (UniqueName: \"kubernetes.io/projected/53e80c85-256f-4e3a-8338-091b69c8a111-kube-api-access-p2cjr\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.412736 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-single-1\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.481777 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.490913 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.490974 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t65v9\" (UniqueName: \"kubernetes.io/projected/ac0f9efc-607e-4d26-8677-3cfdbcae5644-kube-api-access-t65v9\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.491009 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-dev\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.491054 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.491061 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: 
I0131 14:59:53.491097 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac0f9efc-607e-4d26-8677-3cfdbcae5644-scripts\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.491138 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ac0f9efc-607e-4d26-8677-3cfdbcae5644-httpd-run\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.491141 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-dev\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.491176 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-etc-nvme\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.491209 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.491232 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.491274 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac0f9efc-607e-4d26-8677-3cfdbcae5644-config-data\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.491294 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-etc-nvme\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.491308 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac0f9efc-607e-4d26-8677-3cfdbcae5644-logs\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.491323 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") device mount path \"/mnt/openstack/pv01\"" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.491333 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-var-locks-brick\") pod 
\"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.491336 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-lib-modules\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.491370 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-sys\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.491394 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-run\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.491408 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") device mount path \"/mnt/openstack/pv03\"" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.491452 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-sys\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " 
pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.491476 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-run\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.491868 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac0f9efc-607e-4d26-8677-3cfdbcae5644-logs\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.491876 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ac0f9efc-607e-4d26-8677-3cfdbcae5644-httpd-run\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.491997 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-lib-modules\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.497424 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac0f9efc-607e-4d26-8677-3cfdbcae5644-config-data\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.498496 4751 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac0f9efc-607e-4d26-8677-3cfdbcae5644-scripts\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.512423 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.516415 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t65v9\" (UniqueName: \"kubernetes.io/projected/ac0f9efc-607e-4d26-8677-3cfdbcae5644-kube-api-access-t65v9\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.519611 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-single-0\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.621520 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.824264 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 14:59:53 crc kubenswrapper[4751]: W0131 14:59:53.827143 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod53e80c85_256f_4e3a_8338_091b69c8a111.slice/crio-d0fc51af73d94f86e3cd1b0621a38ca7cd14201bdbba30a0fccb4019efc30e6f WatchSource:0}: Error finding container d0fc51af73d94f86e3cd1b0621a38ca7cd14201bdbba30a0fccb4019efc30e6f: Status 404 returned error can't find the container with id d0fc51af73d94f86e3cd1b0621a38ca7cd14201bdbba30a0fccb4019efc30e6f Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.928538 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.969371 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"ac0f9efc-607e-4d26-8677-3cfdbcae5644","Type":"ContainerStarted","Data":"ed8251e3c7704b6272035a19ef7b83f396abd3798eb2dc76182eb938556d6f1b"} Jan 31 14:59:53 crc kubenswrapper[4751]: I0131 14:59:53.970633 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"53e80c85-256f-4e3a-8338-091b69c8a111","Type":"ContainerStarted","Data":"d0fc51af73d94f86e3cd1b0621a38ca7cd14201bdbba30a0fccb4019efc30e6f"} Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.030800 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"53e80c85-256f-4e3a-8338-091b69c8a111","Type":"ContainerStarted","Data":"2470407eb06da53e051c9bcfd402a9b5b782f16d58c74eaa361abad6c79fcccd"} Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.162807 4751 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-cache-glance-default-single-1-cleaner-29497860-zg4ld"] Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.163915 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-cache-glance-default-single-1-cleaner-29497860-zg4ld" Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.196532 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-cache-glance-default-single-0-cleaner-29497860-lwlxh"] Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.197635 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-cache-glance-default-single-0-cleaner-29497860-lwlxh" Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.204411 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497860-xjmxh"] Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.205359 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-xjmxh" Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.207025 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.207195 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.208034 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gfv7v\" (UniqueName: \"kubernetes.io/projected/4359ffb3-e292-485f-b762-e131f9a9e869-kube-api-access-gfv7v\") pod \"glance-cache-glance-default-single-1-cleaner-29497860-zg4ld\" (UID: \"4359ffb3-e292-485f-b762-e131f9a9e869\") " pod="glance-kuttl-tests/glance-cache-glance-default-single-1-cleaner-29497860-zg4ld" Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.208099 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-cache-glance-default-single-1-cleaner-29497860-zg4ld\" (UID: \"4359ffb3-e292-485f-b762-e131f9a9e869\") " pod="glance-kuttl-tests/glance-cache-glance-default-single-1-cleaner-29497860-zg4ld" Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.208183 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-cache-config-data\" (UniqueName: \"kubernetes.io/secret/4359ffb3-e292-485f-b762-e131f9a9e869-image-cache-config-data\") pod \"glance-cache-glance-default-single-1-cleaner-29497860-zg4ld\" (UID: \"4359ffb3-e292-485f-b762-e131f9a9e869\") " pod="glance-kuttl-tests/glance-cache-glance-default-single-1-cleaner-29497860-zg4ld" Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.225827 4751 
kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-cache-glance-default-single-0-cleaner-29497860-lwlxh"] Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.234405 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-cache-glance-default-single-1-cleaner-29497860-zg4ld\" (UID: \"4359ffb3-e292-485f-b762-e131f9a9e869\") " pod="glance-kuttl-tests/glance-cache-glance-default-single-1-cleaner-29497860-zg4ld" Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.273402 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-cache-glance-default-single-1-cleaner-29497860-zg4ld"] Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.297940 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497860-xjmxh"] Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.309654 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mdt5\" (UniqueName: \"kubernetes.io/projected/d5db9258-7fae-47d2-acf9-c523d3d87193-kube-api-access-7mdt5\") pod \"glance-cache-glance-default-single-0-cleaner-29497860-lwlxh\" (UID: \"d5db9258-7fae-47d2-acf9-c523d3d87193\") " pod="glance-kuttl-tests/glance-cache-glance-default-single-0-cleaner-29497860-lwlxh" Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.309704 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-cache-glance-default-single-0-cleaner-29497860-lwlxh\" (UID: \"d5db9258-7fae-47d2-acf9-c523d3d87193\") " pod="glance-kuttl-tests/glance-cache-glance-default-single-0-cleaner-29497860-lwlxh" Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.309737 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"image-cache-config-data\" (UniqueName: \"kubernetes.io/secret/d5db9258-7fae-47d2-acf9-c523d3d87193-image-cache-config-data\") pod \"glance-cache-glance-default-single-0-cleaner-29497860-lwlxh\" (UID: \"d5db9258-7fae-47d2-acf9-c523d3d87193\") " pod="glance-kuttl-tests/glance-cache-glance-default-single-0-cleaner-29497860-lwlxh" Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.309780 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gfv7v\" (UniqueName: \"kubernetes.io/projected/4359ffb3-e292-485f-b762-e131f9a9e869-kube-api-access-gfv7v\") pod \"glance-cache-glance-default-single-1-cleaner-29497860-zg4ld\" (UID: \"4359ffb3-e292-485f-b762-e131f9a9e869\") " pod="glance-kuttl-tests/glance-cache-glance-default-single-1-cleaner-29497860-zg4ld" Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.309809 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zp59q\" (UniqueName: \"kubernetes.io/projected/c304f066-d32f-4ebc-af80-09f3680a14cd-kube-api-access-zp59q\") pod \"collect-profiles-29497860-xjmxh\" (UID: \"c304f066-d32f-4ebc-af80-09f3680a14cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-xjmxh" Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.309838 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c304f066-d32f-4ebc-af80-09f3680a14cd-config-volume\") pod \"collect-profiles-29497860-xjmxh\" (UID: \"c304f066-d32f-4ebc-af80-09f3680a14cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-xjmxh" Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.309916 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-cache-config-data\" (UniqueName: 
\"kubernetes.io/secret/4359ffb3-e292-485f-b762-e131f9a9e869-image-cache-config-data\") pod \"glance-cache-glance-default-single-1-cleaner-29497860-zg4ld\" (UID: \"4359ffb3-e292-485f-b762-e131f9a9e869\") " pod="glance-kuttl-tests/glance-cache-glance-default-single-1-cleaner-29497860-zg4ld" Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.309947 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c304f066-d32f-4ebc-af80-09f3680a14cd-secret-volume\") pod \"collect-profiles-29497860-xjmxh\" (UID: \"c304f066-d32f-4ebc-af80-09f3680a14cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-xjmxh" Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.314793 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-cache-config-data\" (UniqueName: \"kubernetes.io/secret/4359ffb3-e292-485f-b762-e131f9a9e869-image-cache-config-data\") pod \"glance-cache-glance-default-single-1-cleaner-29497860-zg4ld\" (UID: \"4359ffb3-e292-485f-b762-e131f9a9e869\") " pod="glance-kuttl-tests/glance-cache-glance-default-single-1-cleaner-29497860-zg4ld" Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.323734 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gfv7v\" (UniqueName: \"kubernetes.io/projected/4359ffb3-e292-485f-b762-e131f9a9e869-kube-api-access-gfv7v\") pod \"glance-cache-glance-default-single-1-cleaner-29497860-zg4ld\" (UID: \"4359ffb3-e292-485f-b762-e131f9a9e869\") " pod="glance-kuttl-tests/glance-cache-glance-default-single-1-cleaner-29497860-zg4ld" Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.326391 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-cache-glance-default-single-0-cleaner-29497860-lwlxh\" (UID: \"d5db9258-7fae-47d2-acf9-c523d3d87193\") 
" pod="glance-kuttl-tests/glance-cache-glance-default-single-0-cleaner-29497860-lwlxh" Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.411275 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c304f066-d32f-4ebc-af80-09f3680a14cd-secret-volume\") pod \"collect-profiles-29497860-xjmxh\" (UID: \"c304f066-d32f-4ebc-af80-09f3680a14cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-xjmxh" Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.411329 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mdt5\" (UniqueName: \"kubernetes.io/projected/d5db9258-7fae-47d2-acf9-c523d3d87193-kube-api-access-7mdt5\") pod \"glance-cache-glance-default-single-0-cleaner-29497860-lwlxh\" (UID: \"d5db9258-7fae-47d2-acf9-c523d3d87193\") " pod="glance-kuttl-tests/glance-cache-glance-default-single-0-cleaner-29497860-lwlxh" Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.411378 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-cache-config-data\" (UniqueName: \"kubernetes.io/secret/d5db9258-7fae-47d2-acf9-c523d3d87193-image-cache-config-data\") pod \"glance-cache-glance-default-single-0-cleaner-29497860-lwlxh\" (UID: \"d5db9258-7fae-47d2-acf9-c523d3d87193\") " pod="glance-kuttl-tests/glance-cache-glance-default-single-0-cleaner-29497860-lwlxh" Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.411413 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zp59q\" (UniqueName: \"kubernetes.io/projected/c304f066-d32f-4ebc-af80-09f3680a14cd-kube-api-access-zp59q\") pod \"collect-profiles-29497860-xjmxh\" (UID: \"c304f066-d32f-4ebc-af80-09f3680a14cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-xjmxh" Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.411443 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c304f066-d32f-4ebc-af80-09f3680a14cd-config-volume\") pod \"collect-profiles-29497860-xjmxh\" (UID: \"c304f066-d32f-4ebc-af80-09f3680a14cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-xjmxh" Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.414477 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c304f066-d32f-4ebc-af80-09f3680a14cd-config-volume\") pod \"collect-profiles-29497860-xjmxh\" (UID: \"c304f066-d32f-4ebc-af80-09f3680a14cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-xjmxh" Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.418704 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-cache-config-data\" (UniqueName: \"kubernetes.io/secret/d5db9258-7fae-47d2-acf9-c523d3d87193-image-cache-config-data\") pod \"glance-cache-glance-default-single-0-cleaner-29497860-lwlxh\" (UID: \"d5db9258-7fae-47d2-acf9-c523d3d87193\") " pod="glance-kuttl-tests/glance-cache-glance-default-single-0-cleaner-29497860-lwlxh" Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.419629 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c304f066-d32f-4ebc-af80-09f3680a14cd-secret-volume\") pod \"collect-profiles-29497860-xjmxh\" (UID: \"c304f066-d32f-4ebc-af80-09f3680a14cd\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-xjmxh" Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.436253 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zp59q\" (UniqueName: \"kubernetes.io/projected/c304f066-d32f-4ebc-af80-09f3680a14cd-kube-api-access-zp59q\") pod \"collect-profiles-29497860-xjmxh\" (UID: \"c304f066-d32f-4ebc-af80-09f3680a14cd\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-xjmxh" Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.436507 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mdt5\" (UniqueName: \"kubernetes.io/projected/d5db9258-7fae-47d2-acf9-c523d3d87193-kube-api-access-7mdt5\") pod \"glance-cache-glance-default-single-0-cleaner-29497860-lwlxh\" (UID: \"d5db9258-7fae-47d2-acf9-c523d3d87193\") " pod="glance-kuttl-tests/glance-cache-glance-default-single-0-cleaner-29497860-lwlxh" Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.512955 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-cache-glance-default-single-1-cleaner-29497860-zg4ld" Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.584735 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-cache-glance-default-single-0-cleaner-29497860-lwlxh" Jan 31 15:00:00 crc kubenswrapper[4751]: I0131 15:00:00.590424 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-xjmxh" Jan 31 15:00:01 crc kubenswrapper[4751]: I0131 15:00:00.923355 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497860-xjmxh"] Jan 31 15:00:01 crc kubenswrapper[4751]: W0131 15:00:00.967747 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4359ffb3_e292_485f_b762_e131f9a9e869.slice/crio-44f28ab8d2a42d4dde7d20dd573c7f68bed70bab0f7831c586028f44bc7d7976 WatchSource:0}: Error finding container 44f28ab8d2a42d4dde7d20dd573c7f68bed70bab0f7831c586028f44bc7d7976: Status 404 returned error can't find the container with id 44f28ab8d2a42d4dde7d20dd573c7f68bed70bab0f7831c586028f44bc7d7976 Jan 31 15:00:01 crc kubenswrapper[4751]: I0131 15:00:00.968371 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-cache-glance-default-single-1-cleaner-29497860-zg4ld"] Jan 31 15:00:01 crc kubenswrapper[4751]: I0131 15:00:01.043480 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"53e80c85-256f-4e3a-8338-091b69c8a111","Type":"ContainerStarted","Data":"52fb0acaeff7876fc2ee5ab2cce699867c40acf2d6cd815e7e538721bdb941cf"} Jan 31 15:00:01 crc kubenswrapper[4751]: I0131 15:00:01.046630 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-xjmxh" event={"ID":"c304f066-d32f-4ebc-af80-09f3680a14cd","Type":"ContainerStarted","Data":"a126fb63db8c50b9a53697e61efa57746e16f049f13c9c76511062853100b24e"} Jan 31 15:00:01 crc kubenswrapper[4751]: I0131 15:00:01.054921 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-cache-glance-default-single-1-cleaner-29497860-zg4ld" 
event={"ID":"4359ffb3-e292-485f-b762-e131f9a9e869","Type":"ContainerStarted","Data":"44f28ab8d2a42d4dde7d20dd573c7f68bed70bab0f7831c586028f44bc7d7976"} Jan 31 15:00:01 crc kubenswrapper[4751]: I0131 15:00:01.055105 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-cache-glance-default-single-0-cleaner-29497860-lwlxh"] Jan 31 15:00:01 crc kubenswrapper[4751]: W0131 15:00:01.062571 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd5db9258_7fae_47d2_acf9_c523d3d87193.slice/crio-a8ecc46e6208031fb6034bc65dbf5c3f1388055248427b8ae008f3d6bbbb7ead WatchSource:0}: Error finding container a8ecc46e6208031fb6034bc65dbf5c3f1388055248427b8ae008f3d6bbbb7ead: Status 404 returned error can't find the container with id a8ecc46e6208031fb6034bc65dbf5c3f1388055248427b8ae008f3d6bbbb7ead Jan 31 15:00:02 crc kubenswrapper[4751]: I0131 15:00:02.063151 4751 generic.go:334] "Generic (PLEG): container finished" podID="c304f066-d32f-4ebc-af80-09f3680a14cd" containerID="5407593bbe6a5401ebbef23950b4e278ee81abe2ac79b5eccd91e19538dc1615" exitCode=0 Jan 31 15:00:02 crc kubenswrapper[4751]: I0131 15:00:02.063220 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-xjmxh" event={"ID":"c304f066-d32f-4ebc-af80-09f3680a14cd","Type":"ContainerDied","Data":"5407593bbe6a5401ebbef23950b4e278ee81abe2ac79b5eccd91e19538dc1615"} Jan 31 15:00:02 crc kubenswrapper[4751]: I0131 15:00:02.065048 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-cache-glance-default-single-0-cleaner-29497860-lwlxh" event={"ID":"d5db9258-7fae-47d2-acf9-c523d3d87193","Type":"ContainerStarted","Data":"522600cf4dfb7197c49e6a2fb7abef1d560bd673fb2da9388c38a54462595db0"} Jan 31 15:00:02 crc kubenswrapper[4751]: I0131 15:00:02.065086 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="glance-kuttl-tests/glance-cache-glance-default-single-0-cleaner-29497860-lwlxh" event={"ID":"d5db9258-7fae-47d2-acf9-c523d3d87193","Type":"ContainerStarted","Data":"a8ecc46e6208031fb6034bc65dbf5c3f1388055248427b8ae008f3d6bbbb7ead"} Jan 31 15:00:02 crc kubenswrapper[4751]: I0131 15:00:02.066917 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"ac0f9efc-607e-4d26-8677-3cfdbcae5644","Type":"ContainerStarted","Data":"97cea10f80c97e4fa02eebedc4adec6206a4c26e20c548ca8591a97a3b1570e1"} Jan 31 15:00:02 crc kubenswrapper[4751]: I0131 15:00:02.066937 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"ac0f9efc-607e-4d26-8677-3cfdbcae5644","Type":"ContainerStarted","Data":"5ae99f624e52366771ec3f54c793000f4df8c5d1267908fa6b69f7cac8418069"} Jan 31 15:00:02 crc kubenswrapper[4751]: I0131 15:00:02.079616 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-cache-glance-default-single-1-cleaner-29497860-zg4ld" event={"ID":"4359ffb3-e292-485f-b762-e131f9a9e869","Type":"ContainerStarted","Data":"fdff4dbce192cc3ad36befaed2781dd7252206ed773249a695ca5b7f5682312b"} Jan 31 15:00:02 crc kubenswrapper[4751]: I0131 15:00:02.127540 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-0" podStartSLOduration=10.127514936 podStartE2EDuration="10.127514936s" podCreationTimestamp="2026-01-31 14:59:52 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:00:02.112939712 +0000 UTC m=+1106.487652597" watchObservedRunningTime="2026-01-31 15:00:02.127514936 +0000 UTC m=+1106.502227821" Jan 31 15:00:02 crc kubenswrapper[4751]: I0131 15:00:02.161470 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-1" 
podStartSLOduration=9.161451402 podStartE2EDuration="9.161451402s" podCreationTimestamp="2026-01-31 14:59:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:00:02.145220803 +0000 UTC m=+1106.519933688" watchObservedRunningTime="2026-01-31 15:00:02.161451402 +0000 UTC m=+1106.536164277" Jan 31 15:00:02 crc kubenswrapper[4751]: I0131 15:00:02.167877 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-cache-glance-default-single-1-cleaner-29497860-zg4ld" podStartSLOduration=2.167859101 podStartE2EDuration="2.167859101s" podCreationTimestamp="2026-01-31 15:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:00:02.160910527 +0000 UTC m=+1106.535623432" watchObservedRunningTime="2026-01-31 15:00:02.167859101 +0000 UTC m=+1106.542571986" Jan 31 15:00:02 crc kubenswrapper[4751]: I0131 15:00:02.182164 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-cache-glance-default-single-0-cleaner-29497860-lwlxh" podStartSLOduration=2.182146298 podStartE2EDuration="2.182146298s" podCreationTimestamp="2026-01-31 15:00:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:00:02.18071767 +0000 UTC m=+1106.555430575" watchObservedRunningTime="2026-01-31 15:00:02.182146298 +0000 UTC m=+1106.556859183" Jan 31 15:00:03 crc kubenswrapper[4751]: I0131 15:00:03.089583 4751 generic.go:334] "Generic (PLEG): container finished" podID="4359ffb3-e292-485f-b762-e131f9a9e869" containerID="fdff4dbce192cc3ad36befaed2781dd7252206ed773249a695ca5b7f5682312b" exitCode=0 Jan 31 15:00:03 crc kubenswrapper[4751]: I0131 15:00:03.089660 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="glance-kuttl-tests/glance-cache-glance-default-single-1-cleaner-29497860-zg4ld" event={"ID":"4359ffb3-e292-485f-b762-e131f9a9e869","Type":"ContainerDied","Data":"fdff4dbce192cc3ad36befaed2781dd7252206ed773249a695ca5b7f5682312b"} Jan 31 15:00:03 crc kubenswrapper[4751]: I0131 15:00:03.092450 4751 generic.go:334] "Generic (PLEG): container finished" podID="d5db9258-7fae-47d2-acf9-c523d3d87193" containerID="522600cf4dfb7197c49e6a2fb7abef1d560bd673fb2da9388c38a54462595db0" exitCode=0 Jan 31 15:00:03 crc kubenswrapper[4751]: I0131 15:00:03.092501 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-cache-glance-default-single-0-cleaner-29497860-lwlxh" event={"ID":"d5db9258-7fae-47d2-acf9-c523d3d87193","Type":"ContainerDied","Data":"522600cf4dfb7197c49e6a2fb7abef1d560bd673fb2da9388c38a54462595db0"} Jan 31 15:00:03 crc kubenswrapper[4751]: I0131 15:00:03.445215 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-xjmxh" Jan 31 15:00:03 crc kubenswrapper[4751]: I0131 15:00:03.482291 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:00:03 crc kubenswrapper[4751]: I0131 15:00:03.482341 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:00:03 crc kubenswrapper[4751]: I0131 15:00:03.515718 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:00:03 crc kubenswrapper[4751]: I0131 15:00:03.528318 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:00:03 crc kubenswrapper[4751]: I0131 15:00:03.568770 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/c304f066-d32f-4ebc-af80-09f3680a14cd-secret-volume\") pod \"c304f066-d32f-4ebc-af80-09f3680a14cd\" (UID: \"c304f066-d32f-4ebc-af80-09f3680a14cd\") " Jan 31 15:00:03 crc kubenswrapper[4751]: I0131 15:00:03.568854 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zp59q\" (UniqueName: \"kubernetes.io/projected/c304f066-d32f-4ebc-af80-09f3680a14cd-kube-api-access-zp59q\") pod \"c304f066-d32f-4ebc-af80-09f3680a14cd\" (UID: \"c304f066-d32f-4ebc-af80-09f3680a14cd\") " Jan 31 15:00:03 crc kubenswrapper[4751]: I0131 15:00:03.568960 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c304f066-d32f-4ebc-af80-09f3680a14cd-config-volume\") pod \"c304f066-d32f-4ebc-af80-09f3680a14cd\" (UID: \"c304f066-d32f-4ebc-af80-09f3680a14cd\") " Jan 31 15:00:03 crc kubenswrapper[4751]: I0131 15:00:03.569705 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c304f066-d32f-4ebc-af80-09f3680a14cd-config-volume" (OuterVolumeSpecName: "config-volume") pod "c304f066-d32f-4ebc-af80-09f3680a14cd" (UID: "c304f066-d32f-4ebc-af80-09f3680a14cd"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:00:03 crc kubenswrapper[4751]: I0131 15:00:03.570099 4751 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c304f066-d32f-4ebc-af80-09f3680a14cd-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 15:00:03 crc kubenswrapper[4751]: I0131 15:00:03.574170 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c304f066-d32f-4ebc-af80-09f3680a14cd-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "c304f066-d32f-4ebc-af80-09f3680a14cd" (UID: "c304f066-d32f-4ebc-af80-09f3680a14cd"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:00:03 crc kubenswrapper[4751]: I0131 15:00:03.574536 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c304f066-d32f-4ebc-af80-09f3680a14cd-kube-api-access-zp59q" (OuterVolumeSpecName: "kube-api-access-zp59q") pod "c304f066-d32f-4ebc-af80-09f3680a14cd" (UID: "c304f066-d32f-4ebc-af80-09f3680a14cd"). InnerVolumeSpecName "kube-api-access-zp59q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:00:03 crc kubenswrapper[4751]: I0131 15:00:03.622976 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:03 crc kubenswrapper[4751]: I0131 15:00:03.623019 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:03 crc kubenswrapper[4751]: I0131 15:00:03.646996 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:03 crc kubenswrapper[4751]: I0131 15:00:03.660619 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:03 crc kubenswrapper[4751]: I0131 15:00:03.671039 4751 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/c304f066-d32f-4ebc-af80-09f3680a14cd-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 15:00:03 crc kubenswrapper[4751]: I0131 15:00:03.671087 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zp59q\" (UniqueName: \"kubernetes.io/projected/c304f066-d32f-4ebc-af80-09f3680a14cd-kube-api-access-zp59q\") on node \"crc\" DevicePath \"\"" Jan 31 15:00:04 crc kubenswrapper[4751]: I0131 15:00:04.103428 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-xjmxh" event={"ID":"c304f066-d32f-4ebc-af80-09f3680a14cd","Type":"ContainerDied","Data":"a126fb63db8c50b9a53697e61efa57746e16f049f13c9c76511062853100b24e"} Jan 31 15:00:04 crc kubenswrapper[4751]: I0131 15:00:04.103828 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a126fb63db8c50b9a53697e61efa57746e16f049f13c9c76511062853100b24e" Jan 31 15:00:04 crc kubenswrapper[4751]: I0131 15:00:04.103782 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497860-xjmxh" Jan 31 15:00:04 crc kubenswrapper[4751]: I0131 15:00:04.104282 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:04 crc kubenswrapper[4751]: I0131 15:00:04.104317 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:00:04 crc kubenswrapper[4751]: I0131 15:00:04.104351 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:04 crc kubenswrapper[4751]: I0131 15:00:04.104363 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:00:04 crc kubenswrapper[4751]: I0131 15:00:04.425097 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-cache-glance-default-single-1-cleaner-29497860-zg4ld" Jan 31 15:00:04 crc kubenswrapper[4751]: I0131 15:00:04.485869 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-cache-config-data\" (UniqueName: \"kubernetes.io/secret/4359ffb3-e292-485f-b762-e131f9a9e869-image-cache-config-data\") pod \"4359ffb3-e292-485f-b762-e131f9a9e869\" (UID: \"4359ffb3-e292-485f-b762-e131f9a9e869\") " Jan 31 15:00:04 crc kubenswrapper[4751]: I0131 15:00:04.485927 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gfv7v\" (UniqueName: \"kubernetes.io/projected/4359ffb3-e292-485f-b762-e131f9a9e869-kube-api-access-gfv7v\") pod \"4359ffb3-e292-485f-b762-e131f9a9e869\" (UID: \"4359ffb3-e292-485f-b762-e131f9a9e869\") " Jan 31 15:00:04 crc kubenswrapper[4751]: I0131 15:00:04.486010 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"4359ffb3-e292-485f-b762-e131f9a9e869\" (UID: \"4359ffb3-e292-485f-b762-e131f9a9e869\") " Jan 31 15:00:04 crc kubenswrapper[4751]: I0131 15:00:04.497796 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/4359ffb3-e292-485f-b762-e131f9a9e869-image-cache-config-data" (OuterVolumeSpecName: "image-cache-config-data") pod "4359ffb3-e292-485f-b762-e131f9a9e869" (UID: "4359ffb3-e292-485f-b762-e131f9a9e869"). InnerVolumeSpecName "image-cache-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:00:04 crc kubenswrapper[4751]: I0131 15:00:04.499606 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance-cache") pod "4359ffb3-e292-485f-b762-e131f9a9e869" (UID: "4359ffb3-e292-485f-b762-e131f9a9e869"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:00:04 crc kubenswrapper[4751]: I0131 15:00:04.504680 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4359ffb3-e292-485f-b762-e131f9a9e869-kube-api-access-gfv7v" (OuterVolumeSpecName: "kube-api-access-gfv7v") pod "4359ffb3-e292-485f-b762-e131f9a9e869" (UID: "4359ffb3-e292-485f-b762-e131f9a9e869"). InnerVolumeSpecName "kube-api-access-gfv7v". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:00:04 crc kubenswrapper[4751]: I0131 15:00:04.542614 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-cache-glance-default-single-0-cleaner-29497860-lwlxh" Jan 31 15:00:04 crc kubenswrapper[4751]: I0131 15:00:04.588131 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mdt5\" (UniqueName: \"kubernetes.io/projected/d5db9258-7fae-47d2-acf9-c523d3d87193-kube-api-access-7mdt5\") pod \"d5db9258-7fae-47d2-acf9-c523d3d87193\" (UID: \"d5db9258-7fae-47d2-acf9-c523d3d87193\") " Jan 31 15:00:04 crc kubenswrapper[4751]: I0131 15:00:04.588194 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-cache-config-data\" (UniqueName: \"kubernetes.io/secret/d5db9258-7fae-47d2-acf9-c523d3d87193-image-cache-config-data\") pod \"d5db9258-7fae-47d2-acf9-c523d3d87193\" (UID: \"d5db9258-7fae-47d2-acf9-c523d3d87193\") " Jan 31 15:00:04 crc kubenswrapper[4751]: I0131 15:00:04.588220 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"d5db9258-7fae-47d2-acf9-c523d3d87193\" (UID: \"d5db9258-7fae-47d2-acf9-c523d3d87193\") " Jan 31 15:00:04 crc kubenswrapper[4751]: I0131 15:00:04.588628 4751 reconciler_common.go:293] "Volume detached for volume \"image-cache-config-data\" (UniqueName: 
\"kubernetes.io/secret/4359ffb3-e292-485f-b762-e131f9a9e869-image-cache-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:00:04 crc kubenswrapper[4751]: I0131 15:00:04.588642 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gfv7v\" (UniqueName: \"kubernetes.io/projected/4359ffb3-e292-485f-b762-e131f9a9e869-kube-api-access-gfv7v\") on node \"crc\" DevicePath \"\"" Jan 31 15:00:04 crc kubenswrapper[4751]: I0131 15:00:04.590986 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d5db9258-7fae-47d2-acf9-c523d3d87193-image-cache-config-data" (OuterVolumeSpecName: "image-cache-config-data") pod "d5db9258-7fae-47d2-acf9-c523d3d87193" (UID: "d5db9258-7fae-47d2-acf9-c523d3d87193"). InnerVolumeSpecName "image-cache-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:00:04 crc kubenswrapper[4751]: I0131 15:00:04.592339 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance-cache") pod "d5db9258-7fae-47d2-acf9-c523d3d87193" (UID: "d5db9258-7fae-47d2-acf9-c523d3d87193"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:00:04 crc kubenswrapper[4751]: I0131 15:00:04.592392 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d5db9258-7fae-47d2-acf9-c523d3d87193-kube-api-access-7mdt5" (OuterVolumeSpecName: "kube-api-access-7mdt5") pod "d5db9258-7fae-47d2-acf9-c523d3d87193" (UID: "d5db9258-7fae-47d2-acf9-c523d3d87193"). InnerVolumeSpecName "kube-api-access-7mdt5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:00:04 crc kubenswrapper[4751]: I0131 15:00:04.690466 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mdt5\" (UniqueName: \"kubernetes.io/projected/d5db9258-7fae-47d2-acf9-c523d3d87193-kube-api-access-7mdt5\") on node \"crc\" DevicePath \"\"" Jan 31 15:00:04 crc kubenswrapper[4751]: I0131 15:00:04.690514 4751 reconciler_common.go:293] "Volume detached for volume \"image-cache-config-data\" (UniqueName: \"kubernetes.io/secret/d5db9258-7fae-47d2-acf9-c523d3d87193-image-cache-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:00:05 crc kubenswrapper[4751]: I0131 15:00:05.111370 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-cache-glance-default-single-1-cleaner-29497860-zg4ld" Jan 31 15:00:05 crc kubenswrapper[4751]: I0131 15:00:05.111413 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-cache-glance-default-single-1-cleaner-29497860-zg4ld" event={"ID":"4359ffb3-e292-485f-b762-e131f9a9e869","Type":"ContainerDied","Data":"44f28ab8d2a42d4dde7d20dd573c7f68bed70bab0f7831c586028f44bc7d7976"} Jan 31 15:00:05 crc kubenswrapper[4751]: I0131 15:00:05.111456 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="44f28ab8d2a42d4dde7d20dd573c7f68bed70bab0f7831c586028f44bc7d7976" Jan 31 15:00:05 crc kubenswrapper[4751]: I0131 15:00:05.113097 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-cache-glance-default-single-0-cleaner-29497860-lwlxh" event={"ID":"d5db9258-7fae-47d2-acf9-c523d3d87193","Type":"ContainerDied","Data":"a8ecc46e6208031fb6034bc65dbf5c3f1388055248427b8ae008f3d6bbbb7ead"} Jan 31 15:00:05 crc kubenswrapper[4751]: I0131 15:00:05.113155 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a8ecc46e6208031fb6034bc65dbf5c3f1388055248427b8ae008f3d6bbbb7ead" Jan 31 
15:00:05 crc kubenswrapper[4751]: I0131 15:00:05.113159 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-cache-glance-default-single-0-cleaner-29497860-lwlxh" Jan 31 15:00:06 crc kubenswrapper[4751]: I0131 15:00:06.135922 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:00:07 crc kubenswrapper[4751]: I0131 15:00:07.049461 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:00:07 crc kubenswrapper[4751]: I0131 15:00:07.121021 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 15:00:07 crc kubenswrapper[4751]: I0131 15:00:07.121252 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="ac0f9efc-607e-4d26-8677-3cfdbcae5644" containerName="glance-log" containerID="cri-o://5ae99f624e52366771ec3f54c793000f4df8c5d1267908fa6b69f7cac8418069" gracePeriod=30 Jan 31 15:00:07 crc kubenswrapper[4751]: I0131 15:00:07.121894 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="ac0f9efc-607e-4d26-8677-3cfdbcae5644" containerName="glance-httpd" containerID="cri-o://97cea10f80c97e4fa02eebedc4adec6206a4c26e20c548ca8591a97a3b1570e1" gracePeriod=30 Jan 31 15:00:07 crc kubenswrapper[4751]: I0131 15:00:07.132939 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-single-0" podUID="ac0f9efc-607e-4d26-8677-3cfdbcae5644" containerName="glance-log" probeResult="failure" output="Get \"http://10.217.0.100:9292/healthcheck\": EOF" Jan 31 15:00:07 crc kubenswrapper[4751]: I0131 15:00:07.139349 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-single-0" 
podUID="ac0f9efc-607e-4d26-8677-3cfdbcae5644" containerName="glance-httpd" probeResult="failure" output="Get \"http://10.217.0.100:9292/healthcheck\": EOF" Jan 31 15:00:08 crc kubenswrapper[4751]: I0131 15:00:08.149154 4751 generic.go:334] "Generic (PLEG): container finished" podID="ac0f9efc-607e-4d26-8677-3cfdbcae5644" containerID="5ae99f624e52366771ec3f54c793000f4df8c5d1267908fa6b69f7cac8418069" exitCode=143 Jan 31 15:00:08 crc kubenswrapper[4751]: I0131 15:00:08.149227 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"ac0f9efc-607e-4d26-8677-3cfdbcae5644","Type":"ContainerDied","Data":"5ae99f624e52366771ec3f54c793000f4df8c5d1267908fa6b69f7cac8418069"} Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.216630 4751 generic.go:334] "Generic (PLEG): container finished" podID="ac0f9efc-607e-4d26-8677-3cfdbcae5644" containerID="97cea10f80c97e4fa02eebedc4adec6206a4c26e20c548ca8591a97a3b1570e1" exitCode=0 Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.216809 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"ac0f9efc-607e-4d26-8677-3cfdbcae5644","Type":"ContainerDied","Data":"97cea10f80c97e4fa02eebedc4adec6206a4c26e20c548ca8591a97a3b1570e1"} Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.693333 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.826735 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac0f9efc-607e-4d26-8677-3cfdbcae5644-config-data\") pod \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.826821 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-sys\") pod \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.826861 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-dev\") pod \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.826907 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-run\") pod \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.827011 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ac0f9efc-607e-4d26-8677-3cfdbcae5644-httpd-run\") pod \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.827053 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod 
\"ac0f9efc-607e-4d26-8677-3cfdbcae5644\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.827162 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t65v9\" (UniqueName: \"kubernetes.io/projected/ac0f9efc-607e-4d26-8677-3cfdbcae5644-kube-api-access-t65v9\") pod \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.827199 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-dev" (OuterVolumeSpecName: "dev") pod "ac0f9efc-607e-4d26-8677-3cfdbcae5644" (UID: "ac0f9efc-607e-4d26-8677-3cfdbcae5644"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.827243 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac0f9efc-607e-4d26-8677-3cfdbcae5644-scripts\") pod \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.827296 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-etc-nvme\") pod \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.827400 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac0f9efc-607e-4d26-8677-3cfdbcae5644-logs\") pod \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.827469 4751 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.827533 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-lib-modules\") pod \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.827583 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-etc-iscsi\") pod \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.827627 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-var-locks-brick\") pod \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\" (UID: \"ac0f9efc-607e-4d26-8677-3cfdbcae5644\") " Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.827865 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-sys" (OuterVolumeSpecName: "sys") pod "ac0f9efc-607e-4d26-8677-3cfdbcae5644" (UID: "ac0f9efc-607e-4d26-8677-3cfdbcae5644"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.827891 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-run" (OuterVolumeSpecName: "run") pod "ac0f9efc-607e-4d26-8677-3cfdbcae5644" (UID: "ac0f9efc-607e-4d26-8677-3cfdbcae5644"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.828083 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "ac0f9efc-607e-4d26-8677-3cfdbcae5644" (UID: "ac0f9efc-607e-4d26-8677-3cfdbcae5644"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.828160 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "ac0f9efc-607e-4d26-8677-3cfdbcae5644" (UID: "ac0f9efc-607e-4d26-8677-3cfdbcae5644"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.828155 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "ac0f9efc-607e-4d26-8677-3cfdbcae5644" (UID: "ac0f9efc-607e-4d26-8677-3cfdbcae5644"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.828193 4751 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.828210 4751 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-sys\") on node \"crc\" DevicePath \"\"" Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.828222 4751 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-dev\") on node \"crc\" DevicePath \"\"" Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.828231 4751 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.828225 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "ac0f9efc-607e-4d26-8677-3cfdbcae5644" (UID: "ac0f9efc-607e-4d26-8677-3cfdbcae5644"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.828401 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac0f9efc-607e-4d26-8677-3cfdbcae5644-logs" (OuterVolumeSpecName: "logs") pod "ac0f9efc-607e-4d26-8677-3cfdbcae5644" (UID: "ac0f9efc-607e-4d26-8677-3cfdbcae5644"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.828663 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ac0f9efc-607e-4d26-8677-3cfdbcae5644-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "ac0f9efc-607e-4d26-8677-3cfdbcae5644" (UID: "ac0f9efc-607e-4d26-8677-3cfdbcae5644"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.834260 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "ac0f9efc-607e-4d26-8677-3cfdbcae5644" (UID: "ac0f9efc-607e-4d26-8677-3cfdbcae5644"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.834780 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ac0f9efc-607e-4d26-8677-3cfdbcae5644-kube-api-access-t65v9" (OuterVolumeSpecName: "kube-api-access-t65v9") pod "ac0f9efc-607e-4d26-8677-3cfdbcae5644" (UID: "ac0f9efc-607e-4d26-8677-3cfdbcae5644"). InnerVolumeSpecName "kube-api-access-t65v9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.838185 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance-cache") pod "ac0f9efc-607e-4d26-8677-3cfdbcae5644" (UID: "ac0f9efc-607e-4d26-8677-3cfdbcae5644"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.842954 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac0f9efc-607e-4d26-8677-3cfdbcae5644-scripts" (OuterVolumeSpecName: "scripts") pod "ac0f9efc-607e-4d26-8677-3cfdbcae5644" (UID: "ac0f9efc-607e-4d26-8677-3cfdbcae5644"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.882898 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ac0f9efc-607e-4d26-8677-3cfdbcae5644-config-data" (OuterVolumeSpecName: "config-data") pod "ac0f9efc-607e-4d26-8677-3cfdbcae5644" (UID: "ac0f9efc-607e-4d26-8677-3cfdbcae5644"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.930182 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/ac0f9efc-607e-4d26-8677-3cfdbcae5644-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.930245 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.930263 4751 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.930275 4751 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.930290 4751 
reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ac0f9efc-607e-4d26-8677-3cfdbcae5644-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.930304 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/ac0f9efc-607e-4d26-8677-3cfdbcae5644-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.930330 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.930343 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t65v9\" (UniqueName: \"kubernetes.io/projected/ac0f9efc-607e-4d26-8677-3cfdbcae5644-kube-api-access-t65v9\") on node \"crc\" DevicePath \"\"" Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.930356 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/ac0f9efc-607e-4d26-8677-3cfdbcae5644-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.930369 4751 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/ac0f9efc-607e-4d26-8677-3cfdbcae5644-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.943296 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 31 15:00:14 crc kubenswrapper[4751]: I0131 15:00:14.949049 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 
15:00:15.043245 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.043299 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.226484 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"ac0f9efc-607e-4d26-8677-3cfdbcae5644","Type":"ContainerDied","Data":"ed8251e3c7704b6272035a19ef7b83f396abd3798eb2dc76182eb938556d6f1b"} Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.226539 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.226568 4751 scope.go:117] "RemoveContainer" containerID="97cea10f80c97e4fa02eebedc4adec6206a4c26e20c548ca8591a97a3b1570e1" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.249143 4751 scope.go:117] "RemoveContainer" containerID="5ae99f624e52366771ec3f54c793000f4df8c5d1267908fa6b69f7cac8418069" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.262495 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.270110 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.295184 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 15:00:15 crc kubenswrapper[4751]: E0131 15:00:15.295623 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c304f066-d32f-4ebc-af80-09f3680a14cd" 
containerName="collect-profiles" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.295647 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="c304f066-d32f-4ebc-af80-09f3680a14cd" containerName="collect-profiles" Jan 31 15:00:15 crc kubenswrapper[4751]: E0131 15:00:15.295660 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4359ffb3-e292-485f-b762-e131f9a9e869" containerName="glance-cache-glance-default-single-1-cleaner" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.295672 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4359ffb3-e292-485f-b762-e131f9a9e869" containerName="glance-cache-glance-default-single-1-cleaner" Jan 31 15:00:15 crc kubenswrapper[4751]: E0131 15:00:15.295698 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac0f9efc-607e-4d26-8677-3cfdbcae5644" containerName="glance-log" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.295709 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac0f9efc-607e-4d26-8677-3cfdbcae5644" containerName="glance-log" Jan 31 15:00:15 crc kubenswrapper[4751]: E0131 15:00:15.295736 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d5db9258-7fae-47d2-acf9-c523d3d87193" containerName="glance-cache-glance-default-single-0-cleaner" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.295749 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d5db9258-7fae-47d2-acf9-c523d3d87193" containerName="glance-cache-glance-default-single-0-cleaner" Jan 31 15:00:15 crc kubenswrapper[4751]: E0131 15:00:15.295767 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ac0f9efc-607e-4d26-8677-3cfdbcae5644" containerName="glance-httpd" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.295777 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ac0f9efc-607e-4d26-8677-3cfdbcae5644" containerName="glance-httpd" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.295975 4751 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d5db9258-7fae-47d2-acf9-c523d3d87193" containerName="glance-cache-glance-default-single-0-cleaner" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.296009 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="c304f066-d32f-4ebc-af80-09f3680a14cd" containerName="collect-profiles" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.296029 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac0f9efc-607e-4d26-8677-3cfdbcae5644" containerName="glance-httpd" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.296050 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="4359ffb3-e292-485f-b762-e131f9a9e869" containerName="glance-cache-glance-default-single-1-cleaner" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.296095 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ac0f9efc-607e-4d26-8677-3cfdbcae5644" containerName="glance-log" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.297284 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.304893 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.447793 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.448127 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6f236ad-2ab6-4e51-b934-402f28844e69-logs\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.448159 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.448178 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-dev\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.448247 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-sys\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.448283 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6f236ad-2ab6-4e51-b934-402f28844e69-config-data\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.448312 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-lib-modules\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.448388 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-run\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.448438 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6f236ad-2ab6-4e51-b934-402f28844e69-scripts\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.448469 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wzc4c\" (UniqueName: 
\"kubernetes.io/projected/a6f236ad-2ab6-4e51-b934-402f28844e69-kube-api-access-wzc4c\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.448511 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a6f236ad-2ab6-4e51-b934-402f28844e69-httpd-run\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.448546 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.448613 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-etc-nvme\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.448653 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.549664 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/a6f236ad-2ab6-4e51-b934-402f28844e69-logs\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.549707 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.549744 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-dev\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.549786 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-sys\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.549823 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6f236ad-2ab6-4e51-b934-402f28844e69-config-data\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.549851 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-dev\") pod \"glance-default-single-0\" (UID: 
\"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.549864 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.549887 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-lib-modules\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.549854 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-lib-modules\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.549907 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-sys\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.549929 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-run\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: 
I0131 15:00:15.549961 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-run\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.550001 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6f236ad-2ab6-4e51-b934-402f28844e69-scripts\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.550023 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wzc4c\" (UniqueName: \"kubernetes.io/projected/a6f236ad-2ab6-4e51-b934-402f28844e69-kube-api-access-wzc4c\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.550039 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a6f236ad-2ab6-4e51-b934-402f28844e69-httpd-run\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.550098 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.550143 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-etc-nvme\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.550172 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.550233 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.550392 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") device mount path \"/mnt/openstack/pv03\"" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.550525 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-etc-nvme\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.550556 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-etc-iscsi\") pod 
\"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.550646 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a6f236ad-2ab6-4e51-b934-402f28844e69-httpd-run\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.550679 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") device mount path \"/mnt/openstack/pv01\"" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.551105 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6f236ad-2ab6-4e51-b934-402f28844e69-logs\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.557559 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6f236ad-2ab6-4e51-b934-402f28844e69-scripts\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.562027 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6f236ad-2ab6-4e51-b934-402f28844e69-config-data\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 
31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.582227 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wzc4c\" (UniqueName: \"kubernetes.io/projected/a6f236ad-2ab6-4e51-b934-402f28844e69-kube-api-access-wzc4c\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.584001 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.594119 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-single-0\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:15 crc kubenswrapper[4751]: I0131 15:00:15.630171 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:16 crc kubenswrapper[4751]: I0131 15:00:16.060120 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 15:00:16 crc kubenswrapper[4751]: I0131 15:00:16.235944 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"a6f236ad-2ab6-4e51-b934-402f28844e69","Type":"ContainerStarted","Data":"0404afa0dee3bb2591b16ea7fdc6a0ed77a19e078e63d50f945b13286beb2ed9"} Jan 31 15:00:16 crc kubenswrapper[4751]: I0131 15:00:16.415352 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ac0f9efc-607e-4d26-8677-3cfdbcae5644" path="/var/lib/kubelet/pods/ac0f9efc-607e-4d26-8677-3cfdbcae5644/volumes" Jan 31 15:00:17 crc kubenswrapper[4751]: I0131 15:00:17.245833 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"a6f236ad-2ab6-4e51-b934-402f28844e69","Type":"ContainerStarted","Data":"3d4c9658ef799ffc5bf29e9925845adf68e2321560adab813cd297c7b6dfe0e2"} Jan 31 15:00:17 crc kubenswrapper[4751]: I0131 15:00:17.247548 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"a6f236ad-2ab6-4e51-b934-402f28844e69","Type":"ContainerStarted","Data":"c1281edefeae3927db375b0c14eca77f9671b71769c05b60b41b1179bc1039fe"} Jan 31 15:00:17 crc kubenswrapper[4751]: I0131 15:00:17.276619 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-0" podStartSLOduration=2.276596604 podStartE2EDuration="2.276596604s" podCreationTimestamp="2026-01-31 15:00:15 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:00:17.269137637 +0000 UTC m=+1121.643850572" watchObservedRunningTime="2026-01-31 15:00:17.276596604 +0000 
UTC m=+1121.651309499" Jan 31 15:00:25 crc kubenswrapper[4751]: I0131 15:00:25.630643 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:25 crc kubenswrapper[4751]: I0131 15:00:25.631284 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:25 crc kubenswrapper[4751]: I0131 15:00:25.663160 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:25 crc kubenswrapper[4751]: I0131 15:00:25.668685 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:26 crc kubenswrapper[4751]: I0131 15:00:26.317494 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:26 crc kubenswrapper[4751]: I0131 15:00:26.317566 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:28 crc kubenswrapper[4751]: I0131 15:00:28.221164 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:00:28 crc kubenswrapper[4751]: I0131 15:00:28.224626 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:00 crc kubenswrapper[4751]: I0131 15:01:00.175605 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/keystone-cron-29497861-5bd6d"] Jan 31 15:01:00 crc kubenswrapper[4751]: I0131 15:01:00.177579 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-cron-29497861-5bd6d" Jan 31 15:01:00 crc kubenswrapper[4751]: I0131 15:01:00.197512 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-cron-29497861-5bd6d"] Jan 31 15:01:00 crc kubenswrapper[4751]: I0131 15:01:00.257807 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bce6ceb9-5b0d-4ec7-9492-94dce9bb261d-fernet-keys\") pod \"keystone-cron-29497861-5bd6d\" (UID: \"bce6ceb9-5b0d-4ec7-9492-94dce9bb261d\") " pod="glance-kuttl-tests/keystone-cron-29497861-5bd6d" Jan 31 15:01:00 crc kubenswrapper[4751]: I0131 15:01:00.257906 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bce6ceb9-5b0d-4ec7-9492-94dce9bb261d-config-data\") pod \"keystone-cron-29497861-5bd6d\" (UID: \"bce6ceb9-5b0d-4ec7-9492-94dce9bb261d\") " pod="glance-kuttl-tests/keystone-cron-29497861-5bd6d" Jan 31 15:01:00 crc kubenswrapper[4751]: I0131 15:01:00.257960 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-czpkb\" (UniqueName: \"kubernetes.io/projected/bce6ceb9-5b0d-4ec7-9492-94dce9bb261d-kube-api-access-czpkb\") pod \"keystone-cron-29497861-5bd6d\" (UID: \"bce6ceb9-5b0d-4ec7-9492-94dce9bb261d\") " pod="glance-kuttl-tests/keystone-cron-29497861-5bd6d" Jan 31 15:01:00 crc kubenswrapper[4751]: I0131 15:01:00.359020 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bce6ceb9-5b0d-4ec7-9492-94dce9bb261d-config-data\") pod \"keystone-cron-29497861-5bd6d\" (UID: \"bce6ceb9-5b0d-4ec7-9492-94dce9bb261d\") " pod="glance-kuttl-tests/keystone-cron-29497861-5bd6d" Jan 31 15:01:00 crc kubenswrapper[4751]: I0131 15:01:00.359119 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-czpkb\" (UniqueName: \"kubernetes.io/projected/bce6ceb9-5b0d-4ec7-9492-94dce9bb261d-kube-api-access-czpkb\") pod \"keystone-cron-29497861-5bd6d\" (UID: \"bce6ceb9-5b0d-4ec7-9492-94dce9bb261d\") " pod="glance-kuttl-tests/keystone-cron-29497861-5bd6d" Jan 31 15:01:00 crc kubenswrapper[4751]: I0131 15:01:00.359227 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bce6ceb9-5b0d-4ec7-9492-94dce9bb261d-fernet-keys\") pod \"keystone-cron-29497861-5bd6d\" (UID: \"bce6ceb9-5b0d-4ec7-9492-94dce9bb261d\") " pod="glance-kuttl-tests/keystone-cron-29497861-5bd6d" Jan 31 15:01:00 crc kubenswrapper[4751]: I0131 15:01:00.369037 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bce6ceb9-5b0d-4ec7-9492-94dce9bb261d-config-data\") pod \"keystone-cron-29497861-5bd6d\" (UID: \"bce6ceb9-5b0d-4ec7-9492-94dce9bb261d\") " pod="glance-kuttl-tests/keystone-cron-29497861-5bd6d" Jan 31 15:01:00 crc kubenswrapper[4751]: I0131 15:01:00.376531 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bce6ceb9-5b0d-4ec7-9492-94dce9bb261d-fernet-keys\") pod \"keystone-cron-29497861-5bd6d\" (UID: \"bce6ceb9-5b0d-4ec7-9492-94dce9bb261d\") " pod="glance-kuttl-tests/keystone-cron-29497861-5bd6d" Jan 31 15:01:00 crc kubenswrapper[4751]: I0131 15:01:00.380904 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-czpkb\" (UniqueName: \"kubernetes.io/projected/bce6ceb9-5b0d-4ec7-9492-94dce9bb261d-kube-api-access-czpkb\") pod \"keystone-cron-29497861-5bd6d\" (UID: \"bce6ceb9-5b0d-4ec7-9492-94dce9bb261d\") " pod="glance-kuttl-tests/keystone-cron-29497861-5bd6d" Jan 31 15:01:00 crc kubenswrapper[4751]: I0131 15:01:00.514807 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-cron-29497861-5bd6d" Jan 31 15:01:01 crc kubenswrapper[4751]: I0131 15:01:01.002209 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystone-cron-29497861-5bd6d"] Jan 31 15:01:01 crc kubenswrapper[4751]: W0131 15:01:01.004118 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podbce6ceb9_5b0d_4ec7_9492_94dce9bb261d.slice/crio-e746be916cd4d51c51a7b8ba5c98afa99b72d12f05c9b8ae4e86df55efe466c0 WatchSource:0}: Error finding container e746be916cd4d51c51a7b8ba5c98afa99b72d12f05c9b8ae4e86df55efe466c0: Status 404 returned error can't find the container with id e746be916cd4d51c51a7b8ba5c98afa99b72d12f05c9b8ae4e86df55efe466c0 Jan 31 15:01:01 crc kubenswrapper[4751]: I0131 15:01:01.633225 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-cron-29497861-5bd6d" event={"ID":"bce6ceb9-5b0d-4ec7-9492-94dce9bb261d","Type":"ContainerStarted","Data":"4fd861ffb49593c05c4ca2dc031ea7913a88e0c31f7cbaf913eca6a5819336ff"} Jan 31 15:01:01 crc kubenswrapper[4751]: I0131 15:01:01.633519 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-cron-29497861-5bd6d" event={"ID":"bce6ceb9-5b0d-4ec7-9492-94dce9bb261d","Type":"ContainerStarted","Data":"e746be916cd4d51c51a7b8ba5c98afa99b72d12f05c9b8ae4e86df55efe466c0"} Jan 31 15:01:01 crc kubenswrapper[4751]: I0131 15:01:01.660333 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/keystone-cron-29497861-5bd6d" podStartSLOduration=1.6603033470000002 podStartE2EDuration="1.660303347s" podCreationTimestamp="2026-01-31 15:01:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:01.647935021 +0000 UTC m=+1166.022647906" watchObservedRunningTime="2026-01-31 15:01:01.660303347 
+0000 UTC m=+1166.035016262" Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.174149 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-sync-lvpgz"] Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.187109 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-sync-lvpgz"] Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.195584 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glancebf79-account-delete-vglq2"] Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.196698 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glancebf79-account-delete-vglq2" Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.224121 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glancebf79-account-delete-vglq2"] Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.289364 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mtfm\" (UniqueName: \"kubernetes.io/projected/6de76201-fcd1-48a2-8bba-dcdf63bbdf20-kube-api-access-7mtfm\") pod \"glancebf79-account-delete-vglq2\" (UID: \"6de76201-fcd1-48a2-8bba-dcdf63bbdf20\") " pod="glance-kuttl-tests/glancebf79-account-delete-vglq2" Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.289433 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6de76201-fcd1-48a2-8bba-dcdf63bbdf20-operator-scripts\") pod \"glancebf79-account-delete-vglq2\" (UID: \"6de76201-fcd1-48a2-8bba-dcdf63bbdf20\") " pod="glance-kuttl-tests/glancebf79-account-delete-vglq2" Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.291271 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.291542 
4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="a6f236ad-2ab6-4e51-b934-402f28844e69" containerName="glance-log" containerID="cri-o://3d4c9658ef799ffc5bf29e9925845adf68e2321560adab813cd297c7b6dfe0e2" gracePeriod=30 Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.291915 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="a6f236ad-2ab6-4e51-b934-402f28844e69" containerName="glance-httpd" containerID="cri-o://c1281edefeae3927db375b0c14eca77f9671b71769c05b60b41b1179bc1039fe" gracePeriod=30 Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.298595 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-cache-glance-default-single-1-cleaner-29497860-zg4ld"] Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.309144 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-cache-glance-default-single-0-cleaner-29497860-lwlxh"] Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.317556 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.317871 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-1" podUID="53e80c85-256f-4e3a-8338-091b69c8a111" containerName="glance-log" containerID="cri-o://2470407eb06da53e051c9bcfd402a9b5b782f16d58c74eaa361abad6c79fcccd" gracePeriod=30 Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.317946 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-1" podUID="53e80c85-256f-4e3a-8338-091b69c8a111" containerName="glance-httpd" containerID="cri-o://52fb0acaeff7876fc2ee5ab2cce699867c40acf2d6cd815e7e538721bdb941cf" gracePeriod=30 Jan 31 15:01:02 crc 
kubenswrapper[4751]: I0131 15:01:02.325132 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-cache-glance-default-single-1-cleaner-29497860-zg4ld"] Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.329396 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-cache-glance-default-single-0-cleaner-29497860-lwlxh"] Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.381576 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/openstackclient"] Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.381960 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/openstackclient" podUID="d360673b-7556-44b9-b7bd-4805810da349" containerName="openstackclient" containerID="cri-o://44aed1a495029b4f70f43e4d99769c88600905aa98e64ff530f4a6570f61d3ae" gracePeriod=30 Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.390459 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7mtfm\" (UniqueName: \"kubernetes.io/projected/6de76201-fcd1-48a2-8bba-dcdf63bbdf20-kube-api-access-7mtfm\") pod \"glancebf79-account-delete-vglq2\" (UID: \"6de76201-fcd1-48a2-8bba-dcdf63bbdf20\") " pod="glance-kuttl-tests/glancebf79-account-delete-vglq2" Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.390499 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6de76201-fcd1-48a2-8bba-dcdf63bbdf20-operator-scripts\") pod \"glancebf79-account-delete-vglq2\" (UID: \"6de76201-fcd1-48a2-8bba-dcdf63bbdf20\") " pod="glance-kuttl-tests/glancebf79-account-delete-vglq2" Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.391343 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6de76201-fcd1-48a2-8bba-dcdf63bbdf20-operator-scripts\") pod 
\"glancebf79-account-delete-vglq2\" (UID: \"6de76201-fcd1-48a2-8bba-dcdf63bbdf20\") " pod="glance-kuttl-tests/glancebf79-account-delete-vglq2" Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.418601 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4359ffb3-e292-485f-b762-e131f9a9e869" path="/var/lib/kubelet/pods/4359ffb3-e292-485f-b762-e131f9a9e869/volumes" Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.419448 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c7b87c6-2803-4ae5-9257-1a7e12d26f61" path="/var/lib/kubelet/pods/5c7b87c6-2803-4ae5-9257-1a7e12d26f61/volumes" Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.420132 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d5db9258-7fae-47d2-acf9-c523d3d87193" path="/var/lib/kubelet/pods/d5db9258-7fae-47d2-acf9-c523d3d87193/volumes" Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.421151 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mtfm\" (UniqueName: \"kubernetes.io/projected/6de76201-fcd1-48a2-8bba-dcdf63bbdf20-kube-api-access-7mtfm\") pod \"glancebf79-account-delete-vglq2\" (UID: \"6de76201-fcd1-48a2-8bba-dcdf63bbdf20\") " pod="glance-kuttl-tests/glancebf79-account-delete-vglq2" Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.518088 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glancebf79-account-delete-vglq2" Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.656315 4751 generic.go:334] "Generic (PLEG): container finished" podID="d360673b-7556-44b9-b7bd-4805810da349" containerID="44aed1a495029b4f70f43e4d99769c88600905aa98e64ff530f4a6570f61d3ae" exitCode=143 Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.656648 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstackclient" event={"ID":"d360673b-7556-44b9-b7bd-4805810da349","Type":"ContainerDied","Data":"44aed1a495029b4f70f43e4d99769c88600905aa98e64ff530f4a6570f61d3ae"} Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.666964 4751 generic.go:334] "Generic (PLEG): container finished" podID="53e80c85-256f-4e3a-8338-091b69c8a111" containerID="2470407eb06da53e051c9bcfd402a9b5b782f16d58c74eaa361abad6c79fcccd" exitCode=143 Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.667002 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"53e80c85-256f-4e3a-8338-091b69c8a111","Type":"ContainerDied","Data":"2470407eb06da53e051c9bcfd402a9b5b782f16d58c74eaa361abad6c79fcccd"} Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.672992 4751 generic.go:334] "Generic (PLEG): container finished" podID="a6f236ad-2ab6-4e51-b934-402f28844e69" containerID="3d4c9658ef799ffc5bf29e9925845adf68e2321560adab813cd297c7b6dfe0e2" exitCode=143 Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.673054 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"a6f236ad-2ab6-4e51-b934-402f28844e69","Type":"ContainerDied","Data":"3d4c9658ef799ffc5bf29e9925845adf68e2321560adab813cd297c7b6dfe0e2"} Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.778620 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glancebf79-account-delete-vglq2"] Jan 31 15:01:02 crc 
kubenswrapper[4751]: W0131 15:01:02.788379 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod6de76201_fcd1_48a2_8bba_dcdf63bbdf20.slice/crio-5bf286bf34a619105d42f4b703f7d5e3780035dd5d176dd0ce6e759bed3be659 WatchSource:0}: Error finding container 5bf286bf34a619105d42f4b703f7d5e3780035dd5d176dd0ce6e759bed3be659: Status 404 returned error can't find the container with id 5bf286bf34a619105d42f4b703f7d5e3780035dd5d176dd0ce6e759bed3be659 Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.814007 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstackclient" Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.896082 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/d360673b-7556-44b9-b7bd-4805810da349-openstack-scripts\") pod \"d360673b-7556-44b9-b7bd-4805810da349\" (UID: \"d360673b-7556-44b9-b7bd-4805810da349\") " Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.896404 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8ch7w\" (UniqueName: \"kubernetes.io/projected/d360673b-7556-44b9-b7bd-4805810da349-kube-api-access-8ch7w\") pod \"d360673b-7556-44b9-b7bd-4805810da349\" (UID: \"d360673b-7556-44b9-b7bd-4805810da349\") " Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.896437 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d360673b-7556-44b9-b7bd-4805810da349-openstack-config\") pod \"d360673b-7556-44b9-b7bd-4805810da349\" (UID: \"d360673b-7556-44b9-b7bd-4805810da349\") " Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.896467 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/d360673b-7556-44b9-b7bd-4805810da349-openstack-config-secret\") pod \"d360673b-7556-44b9-b7bd-4805810da349\" (UID: \"d360673b-7556-44b9-b7bd-4805810da349\") " Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.896834 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d360673b-7556-44b9-b7bd-4805810da349-openstack-scripts" (OuterVolumeSpecName: "openstack-scripts") pod "d360673b-7556-44b9-b7bd-4805810da349" (UID: "d360673b-7556-44b9-b7bd-4805810da349"). InnerVolumeSpecName "openstack-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.897157 4751 reconciler_common.go:293] "Volume detached for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/d360673b-7556-44b9-b7bd-4805810da349-openstack-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.926608 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d360673b-7556-44b9-b7bd-4805810da349-kube-api-access-8ch7w" (OuterVolumeSpecName: "kube-api-access-8ch7w") pod "d360673b-7556-44b9-b7bd-4805810da349" (UID: "d360673b-7556-44b9-b7bd-4805810da349"). InnerVolumeSpecName "kube-api-access-8ch7w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.927100 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d360673b-7556-44b9-b7bd-4805810da349-openstack-config" (OuterVolumeSpecName: "openstack-config") pod "d360673b-7556-44b9-b7bd-4805810da349" (UID: "d360673b-7556-44b9-b7bd-4805810da349"). InnerVolumeSpecName "openstack-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.938166 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d360673b-7556-44b9-b7bd-4805810da349-openstack-config-secret" (OuterVolumeSpecName: "openstack-config-secret") pod "d360673b-7556-44b9-b7bd-4805810da349" (UID: "d360673b-7556-44b9-b7bd-4805810da349"). InnerVolumeSpecName "openstack-config-secret". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.999048 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8ch7w\" (UniqueName: \"kubernetes.io/projected/d360673b-7556-44b9-b7bd-4805810da349-kube-api-access-8ch7w\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.999097 4751 reconciler_common.go:293] "Volume detached for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/d360673b-7556-44b9-b7bd-4805810da349-openstack-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:02 crc kubenswrapper[4751]: I0131 15:01:02.999128 4751 reconciler_common.go:293] "Volume detached for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/d360673b-7556-44b9-b7bd-4805810da349-openstack-config-secret\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:03 crc kubenswrapper[4751]: I0131 15:01:03.680384 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/openstackclient" Jan 31 15:01:03 crc kubenswrapper[4751]: I0131 15:01:03.682307 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstackclient" event={"ID":"d360673b-7556-44b9-b7bd-4805810da349","Type":"ContainerDied","Data":"c2e3697d65b3597868569dcd055006b2a37c4a1b745665e4039d129867477c4a"} Jan 31 15:01:03 crc kubenswrapper[4751]: I0131 15:01:03.682379 4751 scope.go:117] "RemoveContainer" containerID="44aed1a495029b4f70f43e4d99769c88600905aa98e64ff530f4a6570f61d3ae" Jan 31 15:01:03 crc kubenswrapper[4751]: I0131 15:01:03.683902 4751 generic.go:334] "Generic (PLEG): container finished" podID="bce6ceb9-5b0d-4ec7-9492-94dce9bb261d" containerID="4fd861ffb49593c05c4ca2dc031ea7913a88e0c31f7cbaf913eca6a5819336ff" exitCode=0 Jan 31 15:01:03 crc kubenswrapper[4751]: I0131 15:01:03.683960 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-cron-29497861-5bd6d" event={"ID":"bce6ceb9-5b0d-4ec7-9492-94dce9bb261d","Type":"ContainerDied","Data":"4fd861ffb49593c05c4ca2dc031ea7913a88e0c31f7cbaf913eca6a5819336ff"} Jan 31 15:01:03 crc kubenswrapper[4751]: I0131 15:01:03.685961 4751 generic.go:334] "Generic (PLEG): container finished" podID="6de76201-fcd1-48a2-8bba-dcdf63bbdf20" containerID="2def03042cdbf5505276d6eb76695378d7a0c3b7b97a2d260b2bb7c00d1d66d9" exitCode=0 Jan 31 15:01:03 crc kubenswrapper[4751]: I0131 15:01:03.686028 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glancebf79-account-delete-vglq2" event={"ID":"6de76201-fcd1-48a2-8bba-dcdf63bbdf20","Type":"ContainerDied","Data":"2def03042cdbf5505276d6eb76695378d7a0c3b7b97a2d260b2bb7c00d1d66d9"} Jan 31 15:01:03 crc kubenswrapper[4751]: I0131 15:01:03.686083 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glancebf79-account-delete-vglq2" 
event={"ID":"6de76201-fcd1-48a2-8bba-dcdf63bbdf20","Type":"ContainerStarted","Data":"5bf286bf34a619105d42f4b703f7d5e3780035dd5d176dd0ce6e759bed3be659"} Jan 31 15:01:03 crc kubenswrapper[4751]: I0131 15:01:03.739368 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/openstackclient"] Jan 31 15:01:03 crc kubenswrapper[4751]: I0131 15:01:03.746017 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/openstackclient"] Jan 31 15:01:04 crc kubenswrapper[4751]: I0131 15:01:04.413926 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d360673b-7556-44b9-b7bd-4805810da349" path="/var/lib/kubelet/pods/d360673b-7556-44b9-b7bd-4805810da349/volumes" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.007060 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glancebf79-account-delete-vglq2" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.011626 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-cron-29497861-5bd6d" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.134457 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6de76201-fcd1-48a2-8bba-dcdf63bbdf20-operator-scripts\") pod \"6de76201-fcd1-48a2-8bba-dcdf63bbdf20\" (UID: \"6de76201-fcd1-48a2-8bba-dcdf63bbdf20\") " Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.134596 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7mtfm\" (UniqueName: \"kubernetes.io/projected/6de76201-fcd1-48a2-8bba-dcdf63bbdf20-kube-api-access-7mtfm\") pod \"6de76201-fcd1-48a2-8bba-dcdf63bbdf20\" (UID: \"6de76201-fcd1-48a2-8bba-dcdf63bbdf20\") " Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.134645 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-czpkb\" (UniqueName: \"kubernetes.io/projected/bce6ceb9-5b0d-4ec7-9492-94dce9bb261d-kube-api-access-czpkb\") pod \"bce6ceb9-5b0d-4ec7-9492-94dce9bb261d\" (UID: \"bce6ceb9-5b0d-4ec7-9492-94dce9bb261d\") " Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.134717 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bce6ceb9-5b0d-4ec7-9492-94dce9bb261d-config-data\") pod \"bce6ceb9-5b0d-4ec7-9492-94dce9bb261d\" (UID: \"bce6ceb9-5b0d-4ec7-9492-94dce9bb261d\") " Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.134761 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bce6ceb9-5b0d-4ec7-9492-94dce9bb261d-fernet-keys\") pod \"bce6ceb9-5b0d-4ec7-9492-94dce9bb261d\" (UID: \"bce6ceb9-5b0d-4ec7-9492-94dce9bb261d\") " Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.135254 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for 
volume "kubernetes.io/configmap/6de76201-fcd1-48a2-8bba-dcdf63bbdf20-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "6de76201-fcd1-48a2-8bba-dcdf63bbdf20" (UID: "6de76201-fcd1-48a2-8bba-dcdf63bbdf20"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.139852 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bce6ceb9-5b0d-4ec7-9492-94dce9bb261d-kube-api-access-czpkb" (OuterVolumeSpecName: "kube-api-access-czpkb") pod "bce6ceb9-5b0d-4ec7-9492-94dce9bb261d" (UID: "bce6ceb9-5b0d-4ec7-9492-94dce9bb261d"). InnerVolumeSpecName "kube-api-access-czpkb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.140145 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6de76201-fcd1-48a2-8bba-dcdf63bbdf20-kube-api-access-7mtfm" (OuterVolumeSpecName: "kube-api-access-7mtfm") pod "6de76201-fcd1-48a2-8bba-dcdf63bbdf20" (UID: "6de76201-fcd1-48a2-8bba-dcdf63bbdf20"). InnerVolumeSpecName "kube-api-access-7mtfm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.151226 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bce6ceb9-5b0d-4ec7-9492-94dce9bb261d-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "bce6ceb9-5b0d-4ec7-9492-94dce9bb261d" (UID: "bce6ceb9-5b0d-4ec7-9492-94dce9bb261d"). InnerVolumeSpecName "fernet-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.180109 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bce6ceb9-5b0d-4ec7-9492-94dce9bb261d-config-data" (OuterVolumeSpecName: "config-data") pod "bce6ceb9-5b0d-4ec7-9492-94dce9bb261d" (UID: "bce6ceb9-5b0d-4ec7-9492-94dce9bb261d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.236729 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7mtfm\" (UniqueName: \"kubernetes.io/projected/6de76201-fcd1-48a2-8bba-dcdf63bbdf20-kube-api-access-7mtfm\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.236769 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-czpkb\" (UniqueName: \"kubernetes.io/projected/bce6ceb9-5b0d-4ec7-9492-94dce9bb261d-kube-api-access-czpkb\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.236783 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/bce6ceb9-5b0d-4ec7-9492-94dce9bb261d-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.236797 4751 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/bce6ceb9-5b0d-4ec7-9492-94dce9bb261d-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.236811 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/6de76201-fcd1-48a2-8bba-dcdf63bbdf20-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.709843 4751 generic.go:334] "Generic (PLEG): container finished" podID="53e80c85-256f-4e3a-8338-091b69c8a111" 
containerID="52fb0acaeff7876fc2ee5ab2cce699867c40acf2d6cd815e7e538721bdb941cf" exitCode=0 Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.710159 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"53e80c85-256f-4e3a-8338-091b69c8a111","Type":"ContainerDied","Data":"52fb0acaeff7876fc2ee5ab2cce699867c40acf2d6cd815e7e538721bdb941cf"} Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.714728 4751 generic.go:334] "Generic (PLEG): container finished" podID="a6f236ad-2ab6-4e51-b934-402f28844e69" containerID="c1281edefeae3927db375b0c14eca77f9671b71769c05b60b41b1179bc1039fe" exitCode=0 Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.714787 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"a6f236ad-2ab6-4e51-b934-402f28844e69","Type":"ContainerDied","Data":"c1281edefeae3927db375b0c14eca77f9671b71769c05b60b41b1179bc1039fe"} Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.717176 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glancebf79-account-delete-vglq2" event={"ID":"6de76201-fcd1-48a2-8bba-dcdf63bbdf20","Type":"ContainerDied","Data":"5bf286bf34a619105d42f4b703f7d5e3780035dd5d176dd0ce6e759bed3be659"} Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.717202 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5bf286bf34a619105d42f4b703f7d5e3780035dd5d176dd0ce6e759bed3be659" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.717278 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glancebf79-account-delete-vglq2" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.730525 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-cron-29497861-5bd6d" event={"ID":"bce6ceb9-5b0d-4ec7-9492-94dce9bb261d","Type":"ContainerDied","Data":"e746be916cd4d51c51a7b8ba5c98afa99b72d12f05c9b8ae4e86df55efe466c0"} Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.730816 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e746be916cd4d51c51a7b8ba5c98afa99b72d12f05c9b8ae4e86df55efe466c0" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.731710 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-cron-29497861-5bd6d" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.813806 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.848616 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962083 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6f236ad-2ab6-4e51-b934-402f28844e69-logs\") pod \"a6f236ad-2ab6-4e51-b934-402f28844e69\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962132 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"53e80c85-256f-4e3a-8338-091b69c8a111\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962161 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-dev\") pod \"a6f236ad-2ab6-4e51-b934-402f28844e69\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962179 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-sys\") pod \"a6f236ad-2ab6-4e51-b934-402f28844e69\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962191 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-lib-modules\") pod \"53e80c85-256f-4e3a-8338-091b69c8a111\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962213 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod 
\"53e80c85-256f-4e3a-8338-091b69c8a111\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962232 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53e80c85-256f-4e3a-8338-091b69c8a111-httpd-run\") pod \"53e80c85-256f-4e3a-8338-091b69c8a111\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962247 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-dev\") pod \"53e80c85-256f-4e3a-8338-091b69c8a111\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962263 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wzc4c\" (UniqueName: \"kubernetes.io/projected/a6f236ad-2ab6-4e51-b934-402f28844e69-kube-api-access-wzc4c\") pod \"a6f236ad-2ab6-4e51-b934-402f28844e69\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962258 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-sys" (OuterVolumeSpecName: "sys") pod "a6f236ad-2ab6-4e51-b934-402f28844e69" (UID: "a6f236ad-2ab6-4e51-b934-402f28844e69"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962286 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53e80c85-256f-4e3a-8338-091b69c8a111-config-data\") pod \"53e80c85-256f-4e3a-8338-091b69c8a111\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962315 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-dev" (OuterVolumeSpecName: "dev") pod "a6f236ad-2ab6-4e51-b934-402f28844e69" (UID: "a6f236ad-2ab6-4e51-b934-402f28844e69"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962320 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-lib-modules\") pod \"a6f236ad-2ab6-4e51-b934-402f28844e69\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962337 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-var-locks-brick\") pod \"53e80c85-256f-4e3a-8338-091b69c8a111\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962349 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-etc-nvme\") pod \"a6f236ad-2ab6-4e51-b934-402f28844e69\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962370 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-etc-nvme\") pod \"53e80c85-256f-4e3a-8338-091b69c8a111\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962399 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-sys\") pod \"53e80c85-256f-4e3a-8338-091b69c8a111\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962412 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"a6f236ad-2ab6-4e51-b934-402f28844e69\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962436 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53e80c85-256f-4e3a-8338-091b69c8a111-logs\") pod \"53e80c85-256f-4e3a-8338-091b69c8a111\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962453 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6f236ad-2ab6-4e51-b934-402f28844e69-config-data\") pod \"a6f236ad-2ab6-4e51-b934-402f28844e69\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962468 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-etc-iscsi\") pod \"53e80c85-256f-4e3a-8338-091b69c8a111\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962483 4751 operation_generator.go:803] UnmountVolume.TearDown 
succeeded for volume "kubernetes.io/empty-dir/53e80c85-256f-4e3a-8338-091b69c8a111-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "53e80c85-256f-4e3a-8338-091b69c8a111" (UID: "53e80c85-256f-4e3a-8338-091b69c8a111"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962493 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "53e80c85-256f-4e3a-8338-091b69c8a111" (UID: "53e80c85-256f-4e3a-8338-091b69c8a111"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962500 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6f236ad-2ab6-4e51-b934-402f28844e69-logs" (OuterVolumeSpecName: "logs") pod "a6f236ad-2ab6-4e51-b934-402f28844e69" (UID: "a6f236ad-2ab6-4e51-b934-402f28844e69"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962489 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-var-locks-brick\") pod \"a6f236ad-2ab6-4e51-b934-402f28844e69\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962517 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "a6f236ad-2ab6-4e51-b934-402f28844e69" (UID: "a6f236ad-2ab6-4e51-b934-402f28844e69"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962525 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "53e80c85-256f-4e3a-8338-091b69c8a111" (UID: "53e80c85-256f-4e3a-8338-091b69c8a111"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962550 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "a6f236ad-2ab6-4e51-b934-402f28844e69" (UID: "a6f236ad-2ab6-4e51-b934-402f28844e69"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962568 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "53e80c85-256f-4e3a-8338-091b69c8a111" (UID: "53e80c85-256f-4e3a-8338-091b69c8a111"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962571 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-run\") pod \"a6f236ad-2ab6-4e51-b934-402f28844e69\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962605 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6f236ad-2ab6-4e51-b934-402f28844e69-scripts\") pod \"a6f236ad-2ab6-4e51-b934-402f28844e69\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962631 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "53e80c85-256f-4e3a-8338-091b69c8a111" (UID: "53e80c85-256f-4e3a-8338-091b69c8a111"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962640 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-etc-iscsi\") pod \"a6f236ad-2ab6-4e51-b934-402f28844e69\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962664 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-dev" (OuterVolumeSpecName: "dev") pod "53e80c85-256f-4e3a-8338-091b69c8a111" (UID: "53e80c85-256f-4e3a-8338-091b69c8a111"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962678 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53e80c85-256f-4e3a-8338-091b69c8a111-scripts\") pod \"53e80c85-256f-4e3a-8338-091b69c8a111\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962690 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-run" (OuterVolumeSpecName: "run") pod "a6f236ad-2ab6-4e51-b934-402f28844e69" (UID: "a6f236ad-2ab6-4e51-b934-402f28844e69"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962708 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p2cjr\" (UniqueName: \"kubernetes.io/projected/53e80c85-256f-4e3a-8338-091b69c8a111-kube-api-access-p2cjr\") pod \"53e80c85-256f-4e3a-8338-091b69c8a111\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962745 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"a6f236ad-2ab6-4e51-b934-402f28844e69\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962784 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-run\") pod \"53e80c85-256f-4e3a-8338-091b69c8a111\" (UID: \"53e80c85-256f-4e3a-8338-091b69c8a111\") " Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962805 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/a6f236ad-2ab6-4e51-b934-402f28844e69-httpd-run\") pod \"a6f236ad-2ab6-4e51-b934-402f28844e69\" (UID: \"a6f236ad-2ab6-4e51-b934-402f28844e69\") " Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962847 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-sys" (OuterVolumeSpecName: "sys") pod "53e80c85-256f-4e3a-8338-091b69c8a111" (UID: "53e80c85-256f-4e3a-8338-091b69c8a111"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962874 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/53e80c85-256f-4e3a-8338-091b69c8a111-logs" (OuterVolumeSpecName: "logs") pod "53e80c85-256f-4e3a-8338-091b69c8a111" (UID: "53e80c85-256f-4e3a-8338-091b69c8a111"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.962904 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a6f236ad-2ab6-4e51-b934-402f28844e69" (UID: "a6f236ad-2ab6-4e51-b934-402f28844e69"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.963301 4751 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-dev\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.963314 4751 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-sys\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.963326 4751 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.963337 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/53e80c85-256f-4e3a-8338-091b69c8a111-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.963347 4751 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-dev\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.963358 4751 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.963367 4751 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.963377 4751 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.963387 4751 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-sys\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.963396 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/53e80c85-256f-4e3a-8338-091b69c8a111-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.963405 4751 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.963415 4751 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.963426 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a6f236ad-2ab6-4e51-b934-402f28844e69-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.963680 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a6f236ad-2ab6-4e51-b934-402f28844e69-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a6f236ad-2ab6-4e51-b934-402f28844e69" (UID: "a6f236ad-2ab6-4e51-b934-402f28844e69"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.966322 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "a6f236ad-2ab6-4e51-b934-402f28844e69" (UID: "a6f236ad-2ab6-4e51-b934-402f28844e69"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.966254 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-run" (OuterVolumeSpecName: "run") pod "53e80c85-256f-4e3a-8338-091b69c8a111" (UID: "53e80c85-256f-4e3a-8338-091b69c8a111"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.967305 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "a6f236ad-2ab6-4e51-b934-402f28844e69" (UID: "a6f236ad-2ab6-4e51-b934-402f28844e69"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.967905 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a6f236ad-2ab6-4e51-b934-402f28844e69-kube-api-access-wzc4c" (OuterVolumeSpecName: "kube-api-access-wzc4c") pod "a6f236ad-2ab6-4e51-b934-402f28844e69" (UID: "a6f236ad-2ab6-4e51-b934-402f28844e69"). InnerVolumeSpecName "kube-api-access-wzc4c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.967926 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance-cache") pod "53e80c85-256f-4e3a-8338-091b69c8a111" (UID: "53e80c85-256f-4e3a-8338-091b69c8a111"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.968009 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/53e80c85-256f-4e3a-8338-091b69c8a111-kube-api-access-p2cjr" (OuterVolumeSpecName: "kube-api-access-p2cjr") pod "53e80c85-256f-4e3a-8338-091b69c8a111" (UID: "53e80c85-256f-4e3a-8338-091b69c8a111"). InnerVolumeSpecName "kube-api-access-p2cjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.971143 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6f236ad-2ab6-4e51-b934-402f28844e69-scripts" (OuterVolumeSpecName: "scripts") pod "a6f236ad-2ab6-4e51-b934-402f28844e69" (UID: "a6f236ad-2ab6-4e51-b934-402f28844e69"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.971253 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance-cache") pod "a6f236ad-2ab6-4e51-b934-402f28844e69" (UID: "a6f236ad-2ab6-4e51-b934-402f28844e69"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.972142 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53e80c85-256f-4e3a-8338-091b69c8a111-scripts" (OuterVolumeSpecName: "scripts") pod "53e80c85-256f-4e3a-8338-091b69c8a111" (UID: "53e80c85-256f-4e3a-8338-091b69c8a111"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:01:05 crc kubenswrapper[4751]: I0131 15:01:05.972644 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "53e80c85-256f-4e3a-8338-091b69c8a111" (UID: "53e80c85-256f-4e3a-8338-091b69c8a111"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.026609 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a6f236ad-2ab6-4e51-b934-402f28844e69-config-data" (OuterVolumeSpecName: "config-data") pod "a6f236ad-2ab6-4e51-b934-402f28844e69" (UID: "a6f236ad-2ab6-4e51-b934-402f28844e69"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.045055 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/53e80c85-256f-4e3a-8338-091b69c8a111-config-data" (OuterVolumeSpecName: "config-data") pod "53e80c85-256f-4e3a-8338-091b69c8a111" (UID: "53e80c85-256f-4e3a-8338-091b69c8a111"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.064916 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.064942 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.064957 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wzc4c\" (UniqueName: \"kubernetes.io/projected/a6f236ad-2ab6-4e51-b934-402f28844e69-kube-api-access-wzc4c\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.064973 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/53e80c85-256f-4e3a-8338-091b69c8a111-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.064985 4751 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.065001 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.065012 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a6f236ad-2ab6-4e51-b934-402f28844e69-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.065025 4751 reconciler_common.go:293] "Volume detached for volume \"run\" 
(UniqueName: \"kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.065036 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a6f236ad-2ab6-4e51-b934-402f28844e69-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.065047 4751 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a6f236ad-2ab6-4e51-b934-402f28844e69-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.065058 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p2cjr\" (UniqueName: \"kubernetes.io/projected/53e80c85-256f-4e3a-8338-091b69c8a111-kube-api-access-p2cjr\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.065089 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/53e80c85-256f-4e3a-8338-091b69c8a111-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.065107 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.065118 4751 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/53e80c85-256f-4e3a-8338-091b69c8a111-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.065128 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a6f236ad-2ab6-4e51-b934-402f28844e69-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.080552 4751 
operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.083867 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.084388 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.092385 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.166608 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.166662 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.166687 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.166712 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.743534 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" 
event={"ID":"53e80c85-256f-4e3a-8338-091b69c8a111","Type":"ContainerDied","Data":"d0fc51af73d94f86e3cd1b0621a38ca7cd14201bdbba30a0fccb4019efc30e6f"} Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.743563 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.743991 4751 scope.go:117] "RemoveContainer" containerID="52fb0acaeff7876fc2ee5ab2cce699867c40acf2d6cd815e7e538721bdb941cf" Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.746998 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"a6f236ad-2ab6-4e51-b934-402f28844e69","Type":"ContainerDied","Data":"0404afa0dee3bb2591b16ea7fdc6a0ed77a19e078e63d50f945b13286beb2ed9"} Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.747113 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.781120 4751 scope.go:117] "RemoveContainer" containerID="2470407eb06da53e051c9bcfd402a9b5b782f16d58c74eaa361abad6c79fcccd" Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.787132 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.803900 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.817709 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.821694 4751 scope.go:117] "RemoveContainer" containerID="c1281edefeae3927db375b0c14eca77f9671b71769c05b60b41b1179bc1039fe" Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.828241 4751 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 15:01:06 crc kubenswrapper[4751]: I0131 15:01:06.843595 4751 scope.go:117] "RemoveContainer" containerID="3d4c9658ef799ffc5bf29e9925845adf68e2321560adab813cd297c7b6dfe0e2" Jan 31 15:01:07 crc kubenswrapper[4751]: I0131 15:01:07.212419 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-create-b8nfw"] Jan 31 15:01:07 crc kubenswrapper[4751]: I0131 15:01:07.218944 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-create-b8nfw"] Jan 31 15:01:07 crc kubenswrapper[4751]: I0131 15:01:07.226083 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-bf79-account-create-update-whmk8"] Jan 31 15:01:07 crc kubenswrapper[4751]: I0131 15:01:07.232879 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glancebf79-account-delete-vglq2"] Jan 31 15:01:07 crc kubenswrapper[4751]: I0131 15:01:07.238650 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glancebf79-account-delete-vglq2"] Jan 31 15:01:07 crc kubenswrapper[4751]: I0131 15:01:07.244943 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-bf79-account-create-update-whmk8"] Jan 31 15:01:08 crc kubenswrapper[4751]: I0131 15:01:08.413642 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4c9ad1c0-9bb7-4d3e-8e68-8310292d89fa" path="/var/lib/kubelet/pods/4c9ad1c0-9bb7-4d3e-8e68-8310292d89fa/volumes" Jan 31 15:01:08 crc kubenswrapper[4751]: I0131 15:01:08.414748 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="53e80c85-256f-4e3a-8338-091b69c8a111" path="/var/lib/kubelet/pods/53e80c85-256f-4e3a-8338-091b69c8a111/volumes" Jan 31 15:01:08 crc kubenswrapper[4751]: I0131 15:01:08.415562 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6de76201-fcd1-48a2-8bba-dcdf63bbdf20" 
path="/var/lib/kubelet/pods/6de76201-fcd1-48a2-8bba-dcdf63bbdf20/volumes" Jan 31 15:01:08 crc kubenswrapper[4751]: I0131 15:01:08.416844 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="892f8632-f7d8-46b0-a39a-4a84f5e3a2aa" path="/var/lib/kubelet/pods/892f8632-f7d8-46b0-a39a-4a84f5e3a2aa/volumes" Jan 31 15:01:08 crc kubenswrapper[4751]: I0131 15:01:08.417723 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a6f236ad-2ab6-4e51-b934-402f28844e69" path="/var/lib/kubelet/pods/a6f236ad-2ab6-4e51-b934-402f28844e69/volumes" Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.268787 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-bd3b-account-create-update-nbdd9"] Jan 31 15:01:09 crc kubenswrapper[4751]: E0131 15:01:09.269483 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6f236ad-2ab6-4e51-b934-402f28844e69" containerName="glance-httpd" Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.269500 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6f236ad-2ab6-4e51-b934-402f28844e69" containerName="glance-httpd" Jan 31 15:01:09 crc kubenswrapper[4751]: E0131 15:01:09.269515 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6de76201-fcd1-48a2-8bba-dcdf63bbdf20" containerName="mariadb-account-delete" Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.269523 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="6de76201-fcd1-48a2-8bba-dcdf63bbdf20" containerName="mariadb-account-delete" Jan 31 15:01:09 crc kubenswrapper[4751]: E0131 15:01:09.269546 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a6f236ad-2ab6-4e51-b934-402f28844e69" containerName="glance-log" Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.269552 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a6f236ad-2ab6-4e51-b934-402f28844e69" containerName="glance-log" Jan 31 15:01:09 crc kubenswrapper[4751]: E0131 
15:01:09.269561 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53e80c85-256f-4e3a-8338-091b69c8a111" containerName="glance-log" Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.269567 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="53e80c85-256f-4e3a-8338-091b69c8a111" containerName="glance-log" Jan 31 15:01:09 crc kubenswrapper[4751]: E0131 15:01:09.269582 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bce6ceb9-5b0d-4ec7-9492-94dce9bb261d" containerName="keystone-cron" Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.269588 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="bce6ceb9-5b0d-4ec7-9492-94dce9bb261d" containerName="keystone-cron" Jan 31 15:01:09 crc kubenswrapper[4751]: E0131 15:01:09.269599 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="53e80c85-256f-4e3a-8338-091b69c8a111" containerName="glance-httpd" Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.269605 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="53e80c85-256f-4e3a-8338-091b69c8a111" containerName="glance-httpd" Jan 31 15:01:09 crc kubenswrapper[4751]: E0131 15:01:09.269614 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d360673b-7556-44b9-b7bd-4805810da349" containerName="openstackclient" Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.269633 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d360673b-7556-44b9-b7bd-4805810da349" containerName="openstackclient" Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.269807 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="bce6ceb9-5b0d-4ec7-9492-94dce9bb261d" containerName="keystone-cron" Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.269828 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="53e80c85-256f-4e3a-8338-091b69c8a111" containerName="glance-httpd" Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.269838 4751 
memory_manager.go:354] "RemoveStaleState removing state" podUID="d360673b-7556-44b9-b7bd-4805810da349" containerName="openstackclient" Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.269847 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6f236ad-2ab6-4e51-b934-402f28844e69" containerName="glance-httpd" Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.269858 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="53e80c85-256f-4e3a-8338-091b69c8a111" containerName="glance-log" Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.269869 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a6f236ad-2ab6-4e51-b934-402f28844e69" containerName="glance-log" Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.269879 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="6de76201-fcd1-48a2-8bba-dcdf63bbdf20" containerName="mariadb-account-delete" Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.270517 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-bd3b-account-create-update-nbdd9" Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.274022 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-db-secret" Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.283428 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-create-fm54m"] Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.284381 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-create-fm54m" Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.291539 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-fm54m"] Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.297945 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-bd3b-account-create-update-nbdd9"] Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.416277 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31decdae-8d23-4756-b743-4cd4f7709654-operator-scripts\") pod \"glance-db-create-fm54m\" (UID: \"31decdae-8d23-4756-b743-4cd4f7709654\") " pod="glance-kuttl-tests/glance-db-create-fm54m" Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.416352 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7cb65\" (UniqueName: \"kubernetes.io/projected/b491fa19-1dde-4e28-919f-f120c0c772b7-kube-api-access-7cb65\") pod \"glance-bd3b-account-create-update-nbdd9\" (UID: \"b491fa19-1dde-4e28-919f-f120c0c772b7\") " pod="glance-kuttl-tests/glance-bd3b-account-create-update-nbdd9" Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.416406 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b491fa19-1dde-4e28-919f-f120c0c772b7-operator-scripts\") pod \"glance-bd3b-account-create-update-nbdd9\" (UID: \"b491fa19-1dde-4e28-919f-f120c0c772b7\") " pod="glance-kuttl-tests/glance-bd3b-account-create-update-nbdd9" Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.416462 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wjwvz\" (UniqueName: 
\"kubernetes.io/projected/31decdae-8d23-4756-b743-4cd4f7709654-kube-api-access-wjwvz\") pod \"glance-db-create-fm54m\" (UID: \"31decdae-8d23-4756-b743-4cd4f7709654\") " pod="glance-kuttl-tests/glance-db-create-fm54m" Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.518351 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31decdae-8d23-4756-b743-4cd4f7709654-operator-scripts\") pod \"glance-db-create-fm54m\" (UID: \"31decdae-8d23-4756-b743-4cd4f7709654\") " pod="glance-kuttl-tests/glance-db-create-fm54m" Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.518446 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7cb65\" (UniqueName: \"kubernetes.io/projected/b491fa19-1dde-4e28-919f-f120c0c772b7-kube-api-access-7cb65\") pod \"glance-bd3b-account-create-update-nbdd9\" (UID: \"b491fa19-1dde-4e28-919f-f120c0c772b7\") " pod="glance-kuttl-tests/glance-bd3b-account-create-update-nbdd9" Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.518521 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b491fa19-1dde-4e28-919f-f120c0c772b7-operator-scripts\") pod \"glance-bd3b-account-create-update-nbdd9\" (UID: \"b491fa19-1dde-4e28-919f-f120c0c772b7\") " pod="glance-kuttl-tests/glance-bd3b-account-create-update-nbdd9" Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.518598 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wjwvz\" (UniqueName: \"kubernetes.io/projected/31decdae-8d23-4756-b743-4cd4f7709654-kube-api-access-wjwvz\") pod \"glance-db-create-fm54m\" (UID: \"31decdae-8d23-4756-b743-4cd4f7709654\") " pod="glance-kuttl-tests/glance-db-create-fm54m" Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.519063 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31decdae-8d23-4756-b743-4cd4f7709654-operator-scripts\") pod \"glance-db-create-fm54m\" (UID: \"31decdae-8d23-4756-b743-4cd4f7709654\") " pod="glance-kuttl-tests/glance-db-create-fm54m" Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.519719 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b491fa19-1dde-4e28-919f-f120c0c772b7-operator-scripts\") pod \"glance-bd3b-account-create-update-nbdd9\" (UID: \"b491fa19-1dde-4e28-919f-f120c0c772b7\") " pod="glance-kuttl-tests/glance-bd3b-account-create-update-nbdd9" Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.537639 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7cb65\" (UniqueName: \"kubernetes.io/projected/b491fa19-1dde-4e28-919f-f120c0c772b7-kube-api-access-7cb65\") pod \"glance-bd3b-account-create-update-nbdd9\" (UID: \"b491fa19-1dde-4e28-919f-f120c0c772b7\") " pod="glance-kuttl-tests/glance-bd3b-account-create-update-nbdd9" Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.538788 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wjwvz\" (UniqueName: \"kubernetes.io/projected/31decdae-8d23-4756-b743-4cd4f7709654-kube-api-access-wjwvz\") pod \"glance-db-create-fm54m\" (UID: \"31decdae-8d23-4756-b743-4cd4f7709654\") " pod="glance-kuttl-tests/glance-db-create-fm54m" Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.586665 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-bd3b-account-create-update-nbdd9" Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.606726 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-create-fm54m" Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.830802 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-bd3b-account-create-update-nbdd9"] Jan 31 15:01:09 crc kubenswrapper[4751]: I0131 15:01:09.887637 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-fm54m"] Jan 31 15:01:09 crc kubenswrapper[4751]: W0131 15:01:09.896496 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod31decdae_8d23_4756_b743_4cd4f7709654.slice/crio-7aa6089965c3133848800a7608f1a364a30fa95ef92c6adc517359800a60e4e1 WatchSource:0}: Error finding container 7aa6089965c3133848800a7608f1a364a30fa95ef92c6adc517359800a60e4e1: Status 404 returned error can't find the container with id 7aa6089965c3133848800a7608f1a364a30fa95ef92c6adc517359800a60e4e1 Jan 31 15:01:10 crc kubenswrapper[4751]: I0131 15:01:10.803087 4751 generic.go:334] "Generic (PLEG): container finished" podID="31decdae-8d23-4756-b743-4cd4f7709654" containerID="7e789eeabd8afc4f9d1d5096f902a1d03746cbe8acdf7df1c1fc6d2741b5975c" exitCode=0 Jan 31 15:01:10 crc kubenswrapper[4751]: I0131 15:01:10.803133 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-fm54m" event={"ID":"31decdae-8d23-4756-b743-4cd4f7709654","Type":"ContainerDied","Data":"7e789eeabd8afc4f9d1d5096f902a1d03746cbe8acdf7df1c1fc6d2741b5975c"} Jan 31 15:01:10 crc kubenswrapper[4751]: I0131 15:01:10.803403 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-fm54m" event={"ID":"31decdae-8d23-4756-b743-4cd4f7709654","Type":"ContainerStarted","Data":"7aa6089965c3133848800a7608f1a364a30fa95ef92c6adc517359800a60e4e1"} Jan 31 15:01:10 crc kubenswrapper[4751]: I0131 15:01:10.804650 4751 generic.go:334] "Generic (PLEG): container finished" 
podID="b491fa19-1dde-4e28-919f-f120c0c772b7" containerID="1b08739497c3b40bf4675eac8a3f77cfbe93709c363b0f7d316a1a53ab0f3eab" exitCode=0 Jan 31 15:01:10 crc kubenswrapper[4751]: I0131 15:01:10.804694 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-bd3b-account-create-update-nbdd9" event={"ID":"b491fa19-1dde-4e28-919f-f120c0c772b7","Type":"ContainerDied","Data":"1b08739497c3b40bf4675eac8a3f77cfbe93709c363b0f7d316a1a53ab0f3eab"} Jan 31 15:01:10 crc kubenswrapper[4751]: I0131 15:01:10.804710 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-bd3b-account-create-update-nbdd9" event={"ID":"b491fa19-1dde-4e28-919f-f120c0c772b7","Type":"ContainerStarted","Data":"05b4f104de7d71eab37b450445806c311fba7d5643451d20c4dfecb872c69cf1"} Jan 31 15:01:12 crc kubenswrapper[4751]: I0131 15:01:12.116385 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-bd3b-account-create-update-nbdd9" Jan 31 15:01:12 crc kubenswrapper[4751]: I0131 15:01:12.121280 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-create-fm54m" Jan 31 15:01:12 crc kubenswrapper[4751]: I0131 15:01:12.257251 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b491fa19-1dde-4e28-919f-f120c0c772b7-operator-scripts\") pod \"b491fa19-1dde-4e28-919f-f120c0c772b7\" (UID: \"b491fa19-1dde-4e28-919f-f120c0c772b7\") " Jan 31 15:01:12 crc kubenswrapper[4751]: I0131 15:01:12.257324 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7cb65\" (UniqueName: \"kubernetes.io/projected/b491fa19-1dde-4e28-919f-f120c0c772b7-kube-api-access-7cb65\") pod \"b491fa19-1dde-4e28-919f-f120c0c772b7\" (UID: \"b491fa19-1dde-4e28-919f-f120c0c772b7\") " Jan 31 15:01:12 crc kubenswrapper[4751]: I0131 15:01:12.257348 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31decdae-8d23-4756-b743-4cd4f7709654-operator-scripts\") pod \"31decdae-8d23-4756-b743-4cd4f7709654\" (UID: \"31decdae-8d23-4756-b743-4cd4f7709654\") " Jan 31 15:01:12 crc kubenswrapper[4751]: I0131 15:01:12.257414 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wjwvz\" (UniqueName: \"kubernetes.io/projected/31decdae-8d23-4756-b743-4cd4f7709654-kube-api-access-wjwvz\") pod \"31decdae-8d23-4756-b743-4cd4f7709654\" (UID: \"31decdae-8d23-4756-b743-4cd4f7709654\") " Jan 31 15:01:12 crc kubenswrapper[4751]: I0131 15:01:12.258018 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31decdae-8d23-4756-b743-4cd4f7709654-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "31decdae-8d23-4756-b743-4cd4f7709654" (UID: "31decdae-8d23-4756-b743-4cd4f7709654"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:01:12 crc kubenswrapper[4751]: I0131 15:01:12.258372 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b491fa19-1dde-4e28-919f-f120c0c772b7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "b491fa19-1dde-4e28-919f-f120c0c772b7" (UID: "b491fa19-1dde-4e28-919f-f120c0c772b7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:01:12 crc kubenswrapper[4751]: I0131 15:01:12.258441 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/b491fa19-1dde-4e28-919f-f120c0c772b7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:12 crc kubenswrapper[4751]: I0131 15:01:12.258457 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31decdae-8d23-4756-b743-4cd4f7709654-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:12 crc kubenswrapper[4751]: I0131 15:01:12.262743 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31decdae-8d23-4756-b743-4cd4f7709654-kube-api-access-wjwvz" (OuterVolumeSpecName: "kube-api-access-wjwvz") pod "31decdae-8d23-4756-b743-4cd4f7709654" (UID: "31decdae-8d23-4756-b743-4cd4f7709654"). InnerVolumeSpecName "kube-api-access-wjwvz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:01:12 crc kubenswrapper[4751]: I0131 15:01:12.263588 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b491fa19-1dde-4e28-919f-f120c0c772b7-kube-api-access-7cb65" (OuterVolumeSpecName: "kube-api-access-7cb65") pod "b491fa19-1dde-4e28-919f-f120c0c772b7" (UID: "b491fa19-1dde-4e28-919f-f120c0c772b7"). InnerVolumeSpecName "kube-api-access-7cb65". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:01:12 crc kubenswrapper[4751]: I0131 15:01:12.360208 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7cb65\" (UniqueName: \"kubernetes.io/projected/b491fa19-1dde-4e28-919f-f120c0c772b7-kube-api-access-7cb65\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:12 crc kubenswrapper[4751]: I0131 15:01:12.360270 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wjwvz\" (UniqueName: \"kubernetes.io/projected/31decdae-8d23-4756-b743-4cd4f7709654-kube-api-access-wjwvz\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:12 crc kubenswrapper[4751]: I0131 15:01:12.821176 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-bd3b-account-create-update-nbdd9" event={"ID":"b491fa19-1dde-4e28-919f-f120c0c772b7","Type":"ContainerDied","Data":"05b4f104de7d71eab37b450445806c311fba7d5643451d20c4dfecb872c69cf1"} Jan 31 15:01:12 crc kubenswrapper[4751]: I0131 15:01:12.821244 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="05b4f104de7d71eab37b450445806c311fba7d5643451d20c4dfecb872c69cf1" Jan 31 15:01:12 crc kubenswrapper[4751]: I0131 15:01:12.821330 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-bd3b-account-create-update-nbdd9" Jan 31 15:01:12 crc kubenswrapper[4751]: I0131 15:01:12.823827 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-fm54m" event={"ID":"31decdae-8d23-4756-b743-4cd4f7709654","Type":"ContainerDied","Data":"7aa6089965c3133848800a7608f1a364a30fa95ef92c6adc517359800a60e4e1"} Jan 31 15:01:12 crc kubenswrapper[4751]: I0131 15:01:12.823883 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7aa6089965c3133848800a7608f1a364a30fa95ef92c6adc517359800a60e4e1" Jan 31 15:01:12 crc kubenswrapper[4751]: I0131 15:01:12.823937 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-fm54m" Jan 31 15:01:14 crc kubenswrapper[4751]: I0131 15:01:14.481553 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-sync-qslfl"] Jan 31 15:01:14 crc kubenswrapper[4751]: E0131 15:01:14.482037 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b491fa19-1dde-4e28-919f-f120c0c772b7" containerName="mariadb-account-create-update" Jan 31 15:01:14 crc kubenswrapper[4751]: I0131 15:01:14.482050 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="b491fa19-1dde-4e28-919f-f120c0c772b7" containerName="mariadb-account-create-update" Jan 31 15:01:14 crc kubenswrapper[4751]: E0131 15:01:14.482089 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31decdae-8d23-4756-b743-4cd4f7709654" containerName="mariadb-database-create" Jan 31 15:01:14 crc kubenswrapper[4751]: I0131 15:01:14.482095 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="31decdae-8d23-4756-b743-4cd4f7709654" containerName="mariadb-database-create" Jan 31 15:01:14 crc kubenswrapper[4751]: I0131 15:01:14.482235 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="b491fa19-1dde-4e28-919f-f120c0c772b7" 
containerName="mariadb-account-create-update" Jan 31 15:01:14 crc kubenswrapper[4751]: I0131 15:01:14.482247 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="31decdae-8d23-4756-b743-4cd4f7709654" containerName="mariadb-database-create" Jan 31 15:01:14 crc kubenswrapper[4751]: I0131 15:01:14.482663 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-qslfl" Jan 31 15:01:14 crc kubenswrapper[4751]: I0131 15:01:14.485311 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-2pnvw" Jan 31 15:01:14 crc kubenswrapper[4751]: I0131 15:01:14.485713 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"combined-ca-bundle" Jan 31 15:01:14 crc kubenswrapper[4751]: I0131 15:01:14.486336 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-config-data" Jan 31 15:01:14 crc kubenswrapper[4751]: I0131 15:01:14.495195 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-qslfl"] Jan 31 15:01:14 crc kubenswrapper[4751]: I0131 15:01:14.598405 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7wvr\" (UniqueName: \"kubernetes.io/projected/ec8366b9-bf19-46a4-9033-a05dabe579a4-kube-api-access-n7wvr\") pod \"glance-db-sync-qslfl\" (UID: \"ec8366b9-bf19-46a4-9033-a05dabe579a4\") " pod="glance-kuttl-tests/glance-db-sync-qslfl" Jan 31 15:01:14 crc kubenswrapper[4751]: I0131 15:01:14.598470 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec8366b9-bf19-46a4-9033-a05dabe579a4-config-data\") pod \"glance-db-sync-qslfl\" (UID: \"ec8366b9-bf19-46a4-9033-a05dabe579a4\") " pod="glance-kuttl-tests/glance-db-sync-qslfl" Jan 31 15:01:14 crc kubenswrapper[4751]: I0131 
15:01:14.598497 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec8366b9-bf19-46a4-9033-a05dabe579a4-combined-ca-bundle\") pod \"glance-db-sync-qslfl\" (UID: \"ec8366b9-bf19-46a4-9033-a05dabe579a4\") " pod="glance-kuttl-tests/glance-db-sync-qslfl" Jan 31 15:01:14 crc kubenswrapper[4751]: I0131 15:01:14.598576 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ec8366b9-bf19-46a4-9033-a05dabe579a4-db-sync-config-data\") pod \"glance-db-sync-qslfl\" (UID: \"ec8366b9-bf19-46a4-9033-a05dabe579a4\") " pod="glance-kuttl-tests/glance-db-sync-qslfl" Jan 31 15:01:14 crc kubenswrapper[4751]: I0131 15:01:14.700606 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n7wvr\" (UniqueName: \"kubernetes.io/projected/ec8366b9-bf19-46a4-9033-a05dabe579a4-kube-api-access-n7wvr\") pod \"glance-db-sync-qslfl\" (UID: \"ec8366b9-bf19-46a4-9033-a05dabe579a4\") " pod="glance-kuttl-tests/glance-db-sync-qslfl" Jan 31 15:01:14 crc kubenswrapper[4751]: I0131 15:01:14.701016 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec8366b9-bf19-46a4-9033-a05dabe579a4-config-data\") pod \"glance-db-sync-qslfl\" (UID: \"ec8366b9-bf19-46a4-9033-a05dabe579a4\") " pod="glance-kuttl-tests/glance-db-sync-qslfl" Jan 31 15:01:14 crc kubenswrapper[4751]: I0131 15:01:14.701310 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec8366b9-bf19-46a4-9033-a05dabe579a4-combined-ca-bundle\") pod \"glance-db-sync-qslfl\" (UID: \"ec8366b9-bf19-46a4-9033-a05dabe579a4\") " pod="glance-kuttl-tests/glance-db-sync-qslfl" Jan 31 15:01:14 crc kubenswrapper[4751]: I0131 15:01:14.701618 4751 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ec8366b9-bf19-46a4-9033-a05dabe579a4-db-sync-config-data\") pod \"glance-db-sync-qslfl\" (UID: \"ec8366b9-bf19-46a4-9033-a05dabe579a4\") " pod="glance-kuttl-tests/glance-db-sync-qslfl" Jan 31 15:01:14 crc kubenswrapper[4751]: I0131 15:01:14.708602 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec8366b9-bf19-46a4-9033-a05dabe579a4-config-data\") pod \"glance-db-sync-qslfl\" (UID: \"ec8366b9-bf19-46a4-9033-a05dabe579a4\") " pod="glance-kuttl-tests/glance-db-sync-qslfl" Jan 31 15:01:14 crc kubenswrapper[4751]: I0131 15:01:14.709029 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec8366b9-bf19-46a4-9033-a05dabe579a4-combined-ca-bundle\") pod \"glance-db-sync-qslfl\" (UID: \"ec8366b9-bf19-46a4-9033-a05dabe579a4\") " pod="glance-kuttl-tests/glance-db-sync-qslfl" Jan 31 15:01:14 crc kubenswrapper[4751]: I0131 15:01:14.716785 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ec8366b9-bf19-46a4-9033-a05dabe579a4-db-sync-config-data\") pod \"glance-db-sync-qslfl\" (UID: \"ec8366b9-bf19-46a4-9033-a05dabe579a4\") " pod="glance-kuttl-tests/glance-db-sync-qslfl" Jan 31 15:01:14 crc kubenswrapper[4751]: I0131 15:01:14.732861 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7wvr\" (UniqueName: \"kubernetes.io/projected/ec8366b9-bf19-46a4-9033-a05dabe579a4-kube-api-access-n7wvr\") pod \"glance-db-sync-qslfl\" (UID: \"ec8366b9-bf19-46a4-9033-a05dabe579a4\") " pod="glance-kuttl-tests/glance-db-sync-qslfl" Jan 31 15:01:14 crc kubenswrapper[4751]: I0131 15:01:14.812264 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-qslfl" Jan 31 15:01:15 crc kubenswrapper[4751]: I0131 15:01:15.300642 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-qslfl"] Jan 31 15:01:15 crc kubenswrapper[4751]: I0131 15:01:15.851641 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-qslfl" event={"ID":"ec8366b9-bf19-46a4-9033-a05dabe579a4","Type":"ContainerStarted","Data":"f2d3ac70f8ddad94f9d969f2045d3e2ecc9acc9f7ef1ceb69fe7a6e69910af4e"} Jan 31 15:01:15 crc kubenswrapper[4751]: I0131 15:01:15.852009 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-qslfl" event={"ID":"ec8366b9-bf19-46a4-9033-a05dabe579a4","Type":"ContainerStarted","Data":"28ee5450b21d710d3178a37262a40b9ef50fcb2817d95afa9be7d36b03349a2b"} Jan 31 15:01:15 crc kubenswrapper[4751]: I0131 15:01:15.870395 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-db-sync-qslfl" podStartSLOduration=1.870372776 podStartE2EDuration="1.870372776s" podCreationTimestamp="2026-01-31 15:01:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:15.865492177 +0000 UTC m=+1180.240205062" watchObservedRunningTime="2026-01-31 15:01:15.870372776 +0000 UTC m=+1180.245085681" Jan 31 15:01:18 crc kubenswrapper[4751]: I0131 15:01:18.881605 4751 generic.go:334] "Generic (PLEG): container finished" podID="ec8366b9-bf19-46a4-9033-a05dabe579a4" containerID="f2d3ac70f8ddad94f9d969f2045d3e2ecc9acc9f7ef1ceb69fe7a6e69910af4e" exitCode=0 Jan 31 15:01:18 crc kubenswrapper[4751]: I0131 15:01:18.881741 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-qslfl" 
event={"ID":"ec8366b9-bf19-46a4-9033-a05dabe579a4","Type":"ContainerDied","Data":"f2d3ac70f8ddad94f9d969f2045d3e2ecc9acc9f7ef1ceb69fe7a6e69910af4e"} Jan 31 15:01:20 crc kubenswrapper[4751]: I0131 15:01:20.211574 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-qslfl" Jan 31 15:01:20 crc kubenswrapper[4751]: I0131 15:01:20.289857 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec8366b9-bf19-46a4-9033-a05dabe579a4-combined-ca-bundle\") pod \"ec8366b9-bf19-46a4-9033-a05dabe579a4\" (UID: \"ec8366b9-bf19-46a4-9033-a05dabe579a4\") " Jan 31 15:01:20 crc kubenswrapper[4751]: I0131 15:01:20.290469 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ec8366b9-bf19-46a4-9033-a05dabe579a4-db-sync-config-data\") pod \"ec8366b9-bf19-46a4-9033-a05dabe579a4\" (UID: \"ec8366b9-bf19-46a4-9033-a05dabe579a4\") " Jan 31 15:01:20 crc kubenswrapper[4751]: I0131 15:01:20.290498 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec8366b9-bf19-46a4-9033-a05dabe579a4-config-data\") pod \"ec8366b9-bf19-46a4-9033-a05dabe579a4\" (UID: \"ec8366b9-bf19-46a4-9033-a05dabe579a4\") " Jan 31 15:01:20 crc kubenswrapper[4751]: I0131 15:01:20.290535 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n7wvr\" (UniqueName: \"kubernetes.io/projected/ec8366b9-bf19-46a4-9033-a05dabe579a4-kube-api-access-n7wvr\") pod \"ec8366b9-bf19-46a4-9033-a05dabe579a4\" (UID: \"ec8366b9-bf19-46a4-9033-a05dabe579a4\") " Jan 31 15:01:20 crc kubenswrapper[4751]: I0131 15:01:20.306483 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec8366b9-bf19-46a4-9033-a05dabe579a4-db-sync-config-data" 
(OuterVolumeSpecName: "db-sync-config-data") pod "ec8366b9-bf19-46a4-9033-a05dabe579a4" (UID: "ec8366b9-bf19-46a4-9033-a05dabe579a4"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:01:20 crc kubenswrapper[4751]: I0131 15:01:20.318612 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ec8366b9-bf19-46a4-9033-a05dabe579a4-kube-api-access-n7wvr" (OuterVolumeSpecName: "kube-api-access-n7wvr") pod "ec8366b9-bf19-46a4-9033-a05dabe579a4" (UID: "ec8366b9-bf19-46a4-9033-a05dabe579a4"). InnerVolumeSpecName "kube-api-access-n7wvr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:01:20 crc kubenswrapper[4751]: I0131 15:01:20.336282 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec8366b9-bf19-46a4-9033-a05dabe579a4-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "ec8366b9-bf19-46a4-9033-a05dabe579a4" (UID: "ec8366b9-bf19-46a4-9033-a05dabe579a4"). InnerVolumeSpecName "combined-ca-bundle". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:01:20 crc kubenswrapper[4751]: I0131 15:01:20.367243 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ec8366b9-bf19-46a4-9033-a05dabe579a4-config-data" (OuterVolumeSpecName: "config-data") pod "ec8366b9-bf19-46a4-9033-a05dabe579a4" (UID: "ec8366b9-bf19-46a4-9033-a05dabe579a4"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:01:20 crc kubenswrapper[4751]: I0131 15:01:20.392580 4751 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/ec8366b9-bf19-46a4-9033-a05dabe579a4-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:20 crc kubenswrapper[4751]: I0131 15:01:20.392623 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/ec8366b9-bf19-46a4-9033-a05dabe579a4-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:20 crc kubenswrapper[4751]: I0131 15:01:20.392636 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n7wvr\" (UniqueName: \"kubernetes.io/projected/ec8366b9-bf19-46a4-9033-a05dabe579a4-kube-api-access-n7wvr\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:20 crc kubenswrapper[4751]: I0131 15:01:20.392651 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/ec8366b9-bf19-46a4-9033-a05dabe579a4-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:20 crc kubenswrapper[4751]: I0131 15:01:20.903592 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-qslfl" event={"ID":"ec8366b9-bf19-46a4-9033-a05dabe579a4","Type":"ContainerDied","Data":"28ee5450b21d710d3178a37262a40b9ef50fcb2817d95afa9be7d36b03349a2b"} Jan 31 15:01:20 crc kubenswrapper[4751]: I0131 15:01:20.903629 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="28ee5450b21d710d3178a37262a40b9ef50fcb2817d95afa9be7d36b03349a2b" Jan 31 15:01:20 crc kubenswrapper[4751]: I0131 15:01:20.903651 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-qslfl" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.181385 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 15:01:22 crc kubenswrapper[4751]: E0131 15:01:22.182637 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ec8366b9-bf19-46a4-9033-a05dabe579a4" containerName="glance-db-sync" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.182767 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ec8366b9-bf19-46a4-9033-a05dabe579a4" containerName="glance-db-sync" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.183023 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ec8366b9-bf19-46a4-9033-a05dabe579a4" containerName="glance-db-sync" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.183993 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.187006 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-scripts" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.187370 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"combined-ca-bundle" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.187733 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-single-config-data" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.188089 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"cert-glance-default-public-svc" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.188253 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-2pnvw" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.188417 4751 
reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"cert-glance-default-internal-svc" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.205317 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.318945 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/255bf0e7-10e4-4d84-8607-14c83ac28044-internal-tls-certs\") pod \"glance-default-single-0\" (UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.319309 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfmdr\" (UniqueName: \"kubernetes.io/projected/255bf0e7-10e4-4d84-8607-14c83ac28044-kube-api-access-bfmdr\") pod \"glance-default-single-0\" (UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.319447 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/255bf0e7-10e4-4d84-8607-14c83ac28044-logs\") pod \"glance-default-single-0\" (UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.319604 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/255bf0e7-10e4-4d84-8607-14c83ac28044-scripts\") pod \"glance-default-single-0\" (UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.319729 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-single-0\" (UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.319840 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/255bf0e7-10e4-4d84-8607-14c83ac28044-httpd-run\") pod \"glance-default-single-0\" (UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.319946 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/255bf0e7-10e4-4d84-8607-14c83ac28044-combined-ca-bundle\") pod \"glance-default-single-0\" (UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.320089 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/255bf0e7-10e4-4d84-8607-14c83ac28044-public-tls-certs\") pod \"glance-default-single-0\" (UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.320235 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/255bf0e7-10e4-4d84-8607-14c83ac28044-config-data\") pod \"glance-default-single-0\" (UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.421638 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/255bf0e7-10e4-4d84-8607-14c83ac28044-internal-tls-certs\") pod \"glance-default-single-0\" (UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.421702 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfmdr\" (UniqueName: \"kubernetes.io/projected/255bf0e7-10e4-4d84-8607-14c83ac28044-kube-api-access-bfmdr\") pod \"glance-default-single-0\" (UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.421738 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/255bf0e7-10e4-4d84-8607-14c83ac28044-logs\") pod \"glance-default-single-0\" (UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.421770 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/255bf0e7-10e4-4d84-8607-14c83ac28044-scripts\") pod \"glance-default-single-0\" (UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.421792 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-single-0\" (UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.421811 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/255bf0e7-10e4-4d84-8607-14c83ac28044-httpd-run\") pod \"glance-default-single-0\" (UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.421837 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/255bf0e7-10e4-4d84-8607-14c83ac28044-combined-ca-bundle\") pod \"glance-default-single-0\" (UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.421863 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/255bf0e7-10e4-4d84-8607-14c83ac28044-public-tls-certs\") pod \"glance-default-single-0\" (UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.421883 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/255bf0e7-10e4-4d84-8607-14c83ac28044-config-data\") pod \"glance-default-single-0\" (UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.422490 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/255bf0e7-10e4-4d84-8607-14c83ac28044-logs\") pod \"glance-default-single-0\" (UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.422632 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/255bf0e7-10e4-4d84-8607-14c83ac28044-httpd-run\") pod \"glance-default-single-0\" 
(UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.422862 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-single-0\" (UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") device mount path \"/mnt/openstack/pv13\"" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.426305 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/255bf0e7-10e4-4d84-8607-14c83ac28044-internal-tls-certs\") pod \"glance-default-single-0\" (UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.426802 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/255bf0e7-10e4-4d84-8607-14c83ac28044-public-tls-certs\") pod \"glance-default-single-0\" (UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.426846 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/255bf0e7-10e4-4d84-8607-14c83ac28044-scripts\") pod \"glance-default-single-0\" (UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.427563 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/255bf0e7-10e4-4d84-8607-14c83ac28044-combined-ca-bundle\") pod \"glance-default-single-0\" (UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") " 
pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.428337 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/255bf0e7-10e4-4d84-8607-14c83ac28044-config-data\") pod \"glance-default-single-0\" (UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.440895 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfmdr\" (UniqueName: \"kubernetes.io/projected/255bf0e7-10e4-4d84-8607-14c83ac28044-kube-api-access-bfmdr\") pod \"glance-default-single-0\" (UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.448670 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-single-0\" (UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:22 crc kubenswrapper[4751]: I0131 15:01:22.499291 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:23 crc kubenswrapper[4751]: I0131 15:01:23.307314 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 15:01:23 crc kubenswrapper[4751]: I0131 15:01:23.929736 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"255bf0e7-10e4-4d84-8607-14c83ac28044","Type":"ContainerStarted","Data":"ab0fad05949e27dc661aeaec62db4c80b70388679d98c799d711588e4b30d20c"} Jan 31 15:01:23 crc kubenswrapper[4751]: I0131 15:01:23.929988 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"255bf0e7-10e4-4d84-8607-14c83ac28044","Type":"ContainerStarted","Data":"f8952d07c6de0f133fd84db9683a5d98d942ed60617f64795edab9f81a8ffdfb"} Jan 31 15:01:24 crc kubenswrapper[4751]: I0131 15:01:24.940332 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"255bf0e7-10e4-4d84-8607-14c83ac28044","Type":"ContainerStarted","Data":"a08737138511460d108c2e58fd850d74f67395f3f43148d743fdd1308994567c"} Jan 31 15:01:24 crc kubenswrapper[4751]: I0131 15:01:24.967476 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-0" podStartSLOduration=2.967461269 podStartE2EDuration="2.967461269s" podCreationTimestamp="2026-01-31 15:01:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:24.964709486 +0000 UTC m=+1189.339422431" watchObservedRunningTime="2026-01-31 15:01:24.967461269 +0000 UTC m=+1189.342174154" Jan 31 15:01:32 crc kubenswrapper[4751]: I0131 15:01:32.500417 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:32 crc 
kubenswrapper[4751]: I0131 15:01:32.500873 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:32 crc kubenswrapper[4751]: I0131 15:01:32.539342 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:32 crc kubenswrapper[4751]: I0131 15:01:32.548896 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:33 crc kubenswrapper[4751]: I0131 15:01:33.006120 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:33 crc kubenswrapper[4751]: I0131 15:01:33.006153 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:34 crc kubenswrapper[4751]: I0131 15:01:34.957111 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:34 crc kubenswrapper[4751]: I0131 15:01:34.962250 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:36 crc kubenswrapper[4751]: I0131 15:01:36.467832 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-sync-qslfl"] Jan 31 15:01:36 crc kubenswrapper[4751]: I0131 15:01:36.475783 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-sync-qslfl"] Jan 31 15:01:36 crc kubenswrapper[4751]: I0131 15:01:36.510651 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glancebd3b-account-delete-wcqq6"] Jan 31 15:01:36 crc kubenswrapper[4751]: I0131 15:01:36.511611 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glancebd3b-account-delete-wcqq6" Jan 31 15:01:36 crc kubenswrapper[4751]: I0131 15:01:36.523726 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glancebd3b-account-delete-wcqq6"] Jan 31 15:01:36 crc kubenswrapper[4751]: I0131 15:01:36.554167 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdtdl\" (UniqueName: \"kubernetes.io/projected/3b7181ac-f336-4658-bffc-63553f8972d9-kube-api-access-xdtdl\") pod \"glancebd3b-account-delete-wcqq6\" (UID: \"3b7181ac-f336-4658-bffc-63553f8972d9\") " pod="glance-kuttl-tests/glancebd3b-account-delete-wcqq6" Jan 31 15:01:36 crc kubenswrapper[4751]: I0131 15:01:36.554285 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b7181ac-f336-4658-bffc-63553f8972d9-operator-scripts\") pod \"glancebd3b-account-delete-wcqq6\" (UID: \"3b7181ac-f336-4658-bffc-63553f8972d9\") " pod="glance-kuttl-tests/glancebd3b-account-delete-wcqq6" Jan 31 15:01:36 crc kubenswrapper[4751]: I0131 15:01:36.569024 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 15:01:36 crc kubenswrapper[4751]: I0131 15:01:36.655748 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xdtdl\" (UniqueName: \"kubernetes.io/projected/3b7181ac-f336-4658-bffc-63553f8972d9-kube-api-access-xdtdl\") pod \"glancebd3b-account-delete-wcqq6\" (UID: \"3b7181ac-f336-4658-bffc-63553f8972d9\") " pod="glance-kuttl-tests/glancebd3b-account-delete-wcqq6" Jan 31 15:01:36 crc kubenswrapper[4751]: I0131 15:01:36.655821 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b7181ac-f336-4658-bffc-63553f8972d9-operator-scripts\") pod 
\"glancebd3b-account-delete-wcqq6\" (UID: \"3b7181ac-f336-4658-bffc-63553f8972d9\") " pod="glance-kuttl-tests/glancebd3b-account-delete-wcqq6" Jan 31 15:01:36 crc kubenswrapper[4751]: I0131 15:01:36.656684 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b7181ac-f336-4658-bffc-63553f8972d9-operator-scripts\") pod \"glancebd3b-account-delete-wcqq6\" (UID: \"3b7181ac-f336-4658-bffc-63553f8972d9\") " pod="glance-kuttl-tests/glancebd3b-account-delete-wcqq6" Jan 31 15:01:36 crc kubenswrapper[4751]: I0131 15:01:36.673201 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdtdl\" (UniqueName: \"kubernetes.io/projected/3b7181ac-f336-4658-bffc-63553f8972d9-kube-api-access-xdtdl\") pod \"glancebd3b-account-delete-wcqq6\" (UID: \"3b7181ac-f336-4658-bffc-63553f8972d9\") " pod="glance-kuttl-tests/glancebd3b-account-delete-wcqq6" Jan 31 15:01:36 crc kubenswrapper[4751]: I0131 15:01:36.836853 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glancebd3b-account-delete-wcqq6" Jan 31 15:01:37 crc kubenswrapper[4751]: I0131 15:01:37.051912 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="255bf0e7-10e4-4d84-8607-14c83ac28044" containerName="glance-log" containerID="cri-o://ab0fad05949e27dc661aeaec62db4c80b70388679d98c799d711588e4b30d20c" gracePeriod=30 Jan 31 15:01:37 crc kubenswrapper[4751]: I0131 15:01:37.052007 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="255bf0e7-10e4-4d84-8607-14c83ac28044" containerName="glance-httpd" containerID="cri-o://a08737138511460d108c2e58fd850d74f67395f3f43148d743fdd1308994567c" gracePeriod=30 Jan 31 15:01:37 crc kubenswrapper[4751]: I0131 15:01:37.057120 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-single-0" podUID="255bf0e7-10e4-4d84-8607-14c83ac28044" containerName="glance-log" probeResult="failure" output="Get \"https://10.217.0.110:9292/healthcheck\": EOF" Jan 31 15:01:37 crc kubenswrapper[4751]: I0131 15:01:37.058810 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-single-0" podUID="255bf0e7-10e4-4d84-8607-14c83ac28044" containerName="glance-httpd" probeResult="failure" output="Get \"https://10.217.0.110:9292/healthcheck\": EOF" Jan 31 15:01:37 crc kubenswrapper[4751]: I0131 15:01:37.257712 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glancebd3b-account-delete-wcqq6"] Jan 31 15:01:38 crc kubenswrapper[4751]: I0131 15:01:38.062824 4751 generic.go:334] "Generic (PLEG): container finished" podID="255bf0e7-10e4-4d84-8607-14c83ac28044" containerID="ab0fad05949e27dc661aeaec62db4c80b70388679d98c799d711588e4b30d20c" exitCode=143 Jan 31 15:01:38 crc kubenswrapper[4751]: I0131 15:01:38.062920 4751 kubelet.go:2453] "SyncLoop (PLEG): 
event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"255bf0e7-10e4-4d84-8607-14c83ac28044","Type":"ContainerDied","Data":"ab0fad05949e27dc661aeaec62db4c80b70388679d98c799d711588e4b30d20c"} Jan 31 15:01:38 crc kubenswrapper[4751]: I0131 15:01:38.065256 4751 generic.go:334] "Generic (PLEG): container finished" podID="3b7181ac-f336-4658-bffc-63553f8972d9" containerID="5da73e1408c3942c575e820ab3bbf5f7e673d6aadac72064d98cb22aab529aa9" exitCode=0 Jan 31 15:01:38 crc kubenswrapper[4751]: I0131 15:01:38.065294 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glancebd3b-account-delete-wcqq6" event={"ID":"3b7181ac-f336-4658-bffc-63553f8972d9","Type":"ContainerDied","Data":"5da73e1408c3942c575e820ab3bbf5f7e673d6aadac72064d98cb22aab529aa9"} Jan 31 15:01:38 crc kubenswrapper[4751]: I0131 15:01:38.065345 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glancebd3b-account-delete-wcqq6" event={"ID":"3b7181ac-f336-4658-bffc-63553f8972d9","Type":"ContainerStarted","Data":"45627c6bb5b845b7afa1eefcb62304dc9bb91b2ca087df0c88c11323e14f29b4"} Jan 31 15:01:38 crc kubenswrapper[4751]: I0131 15:01:38.414417 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ec8366b9-bf19-46a4-9033-a05dabe579a4" path="/var/lib/kubelet/pods/ec8366b9-bf19-46a4-9033-a05dabe579a4/volumes" Jan 31 15:01:38 crc kubenswrapper[4751]: I0131 15:01:38.896761 4751 patch_prober.go:28] interesting pod/machine-config-daemon-2wpj7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:01:38 crc kubenswrapper[4751]: I0131 15:01:38.897086 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerName="machine-config-daemon" 
probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:01:39 crc kubenswrapper[4751]: I0131 15:01:39.375956 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glancebd3b-account-delete-wcqq6" Jan 31 15:01:39 crc kubenswrapper[4751]: I0131 15:01:39.493596 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b7181ac-f336-4658-bffc-63553f8972d9-operator-scripts\") pod \"3b7181ac-f336-4658-bffc-63553f8972d9\" (UID: \"3b7181ac-f336-4658-bffc-63553f8972d9\") " Jan 31 15:01:39 crc kubenswrapper[4751]: I0131 15:01:39.493644 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xdtdl\" (UniqueName: \"kubernetes.io/projected/3b7181ac-f336-4658-bffc-63553f8972d9-kube-api-access-xdtdl\") pod \"3b7181ac-f336-4658-bffc-63553f8972d9\" (UID: \"3b7181ac-f336-4658-bffc-63553f8972d9\") " Jan 31 15:01:39 crc kubenswrapper[4751]: I0131 15:01:39.494247 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3b7181ac-f336-4658-bffc-63553f8972d9-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3b7181ac-f336-4658-bffc-63553f8972d9" (UID: "3b7181ac-f336-4658-bffc-63553f8972d9"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:01:39 crc kubenswrapper[4751]: I0131 15:01:39.498266 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b7181ac-f336-4658-bffc-63553f8972d9-kube-api-access-xdtdl" (OuterVolumeSpecName: "kube-api-access-xdtdl") pod "3b7181ac-f336-4658-bffc-63553f8972d9" (UID: "3b7181ac-f336-4658-bffc-63553f8972d9"). InnerVolumeSpecName "kube-api-access-xdtdl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:01:39 crc kubenswrapper[4751]: I0131 15:01:39.595062 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3b7181ac-f336-4658-bffc-63553f8972d9-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:39 crc kubenswrapper[4751]: I0131 15:01:39.595113 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xdtdl\" (UniqueName: \"kubernetes.io/projected/3b7181ac-f336-4658-bffc-63553f8972d9-kube-api-access-xdtdl\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:40 crc kubenswrapper[4751]: I0131 15:01:40.083356 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glancebd3b-account-delete-wcqq6" event={"ID":"3b7181ac-f336-4658-bffc-63553f8972d9","Type":"ContainerDied","Data":"45627c6bb5b845b7afa1eefcb62304dc9bb91b2ca087df0c88c11323e14f29b4"} Jan 31 15:01:40 crc kubenswrapper[4751]: I0131 15:01:40.083410 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="45627c6bb5b845b7afa1eefcb62304dc9bb91b2ca087df0c88c11323e14f29b4" Jan 31 15:01:40 crc kubenswrapper[4751]: I0131 15:01:40.083545 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glancebd3b-account-delete-wcqq6" Jan 31 15:01:40 crc kubenswrapper[4751]: I0131 15:01:40.554432 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:40 crc kubenswrapper[4751]: I0131 15:01:40.607890 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bfmdr\" (UniqueName: \"kubernetes.io/projected/255bf0e7-10e4-4d84-8607-14c83ac28044-kube-api-access-bfmdr\") pod \"255bf0e7-10e4-4d84-8607-14c83ac28044\" (UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") " Jan 31 15:01:40 crc kubenswrapper[4751]: I0131 15:01:40.607942 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/255bf0e7-10e4-4d84-8607-14c83ac28044-internal-tls-certs\") pod \"255bf0e7-10e4-4d84-8607-14c83ac28044\" (UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") " Jan 31 15:01:40 crc kubenswrapper[4751]: I0131 15:01:40.608024 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/255bf0e7-10e4-4d84-8607-14c83ac28044-config-data\") pod \"255bf0e7-10e4-4d84-8607-14c83ac28044\" (UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") " Jan 31 15:01:40 crc kubenswrapper[4751]: I0131 15:01:40.608318 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/255bf0e7-10e4-4d84-8607-14c83ac28044-combined-ca-bundle\") pod \"255bf0e7-10e4-4d84-8607-14c83ac28044\" (UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") " Jan 31 15:01:40 crc kubenswrapper[4751]: I0131 15:01:40.608362 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/255bf0e7-10e4-4d84-8607-14c83ac28044-public-tls-certs\") pod \"255bf0e7-10e4-4d84-8607-14c83ac28044\" (UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") " Jan 31 15:01:40 crc kubenswrapper[4751]: I0131 15:01:40.608432 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"255bf0e7-10e4-4d84-8607-14c83ac28044\" (UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") " Jan 31 15:01:40 crc kubenswrapper[4751]: I0131 15:01:40.608457 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/255bf0e7-10e4-4d84-8607-14c83ac28044-scripts\") pod \"255bf0e7-10e4-4d84-8607-14c83ac28044\" (UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") " Jan 31 15:01:40 crc kubenswrapper[4751]: I0131 15:01:40.608490 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/255bf0e7-10e4-4d84-8607-14c83ac28044-httpd-run\") pod \"255bf0e7-10e4-4d84-8607-14c83ac28044\" (UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") " Jan 31 15:01:40 crc kubenswrapper[4751]: I0131 15:01:40.608544 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/255bf0e7-10e4-4d84-8607-14c83ac28044-logs\") pod \"255bf0e7-10e4-4d84-8607-14c83ac28044\" (UID: \"255bf0e7-10e4-4d84-8607-14c83ac28044\") " Jan 31 15:01:40 crc kubenswrapper[4751]: I0131 15:01:40.609484 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/255bf0e7-10e4-4d84-8607-14c83ac28044-logs" (OuterVolumeSpecName: "logs") pod "255bf0e7-10e4-4d84-8607-14c83ac28044" (UID: "255bf0e7-10e4-4d84-8607-14c83ac28044"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:01:40 crc kubenswrapper[4751]: I0131 15:01:40.609665 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/255bf0e7-10e4-4d84-8607-14c83ac28044-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "255bf0e7-10e4-4d84-8607-14c83ac28044" (UID: "255bf0e7-10e4-4d84-8607-14c83ac28044"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:01:40 crc kubenswrapper[4751]: I0131 15:01:40.614253 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/255bf0e7-10e4-4d84-8607-14c83ac28044-scripts" (OuterVolumeSpecName: "scripts") pod "255bf0e7-10e4-4d84-8607-14c83ac28044" (UID: "255bf0e7-10e4-4d84-8607-14c83ac28044"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:01:40 crc kubenswrapper[4751]: I0131 15:01:40.614252 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/255bf0e7-10e4-4d84-8607-14c83ac28044-kube-api-access-bfmdr" (OuterVolumeSpecName: "kube-api-access-bfmdr") pod "255bf0e7-10e4-4d84-8607-14c83ac28044" (UID: "255bf0e7-10e4-4d84-8607-14c83ac28044"). InnerVolumeSpecName "kube-api-access-bfmdr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:01:40 crc kubenswrapper[4751]: I0131 15:01:40.614904 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage13-crc" (OuterVolumeSpecName: "glance") pod "255bf0e7-10e4-4d84-8607-14c83ac28044" (UID: "255bf0e7-10e4-4d84-8607-14c83ac28044"). InnerVolumeSpecName "local-storage13-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:01:40 crc kubenswrapper[4751]: I0131 15:01:40.627932 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/255bf0e7-10e4-4d84-8607-14c83ac28044-combined-ca-bundle" (OuterVolumeSpecName: "combined-ca-bundle") pod "255bf0e7-10e4-4d84-8607-14c83ac28044" (UID: "255bf0e7-10e4-4d84-8607-14c83ac28044"). InnerVolumeSpecName "combined-ca-bundle". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:01:40 crc kubenswrapper[4751]: I0131 15:01:40.643764 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/255bf0e7-10e4-4d84-8607-14c83ac28044-config-data" (OuterVolumeSpecName: "config-data") pod "255bf0e7-10e4-4d84-8607-14c83ac28044" (UID: "255bf0e7-10e4-4d84-8607-14c83ac28044"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:01:40 crc kubenswrapper[4751]: I0131 15:01:40.644304 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/255bf0e7-10e4-4d84-8607-14c83ac28044-internal-tls-certs" (OuterVolumeSpecName: "internal-tls-certs") pod "255bf0e7-10e4-4d84-8607-14c83ac28044" (UID: "255bf0e7-10e4-4d84-8607-14c83ac28044"). InnerVolumeSpecName "internal-tls-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:01:40 crc kubenswrapper[4751]: I0131 15:01:40.645892 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/255bf0e7-10e4-4d84-8607-14c83ac28044-public-tls-certs" (OuterVolumeSpecName: "public-tls-certs") pod "255bf0e7-10e4-4d84-8607-14c83ac28044" (UID: "255bf0e7-10e4-4d84-8607-14c83ac28044"). InnerVolumeSpecName "public-tls-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:01:40 crc kubenswrapper[4751]: I0131 15:01:40.709895 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/255bf0e7-10e4-4d84-8607-14c83ac28044-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:40 crc kubenswrapper[4751]: I0131 15:01:40.709923 4751 reconciler_common.go:293] "Volume detached for volume \"combined-ca-bundle\" (UniqueName: \"kubernetes.io/secret/255bf0e7-10e4-4d84-8607-14c83ac28044-combined-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:40 crc kubenswrapper[4751]: I0131 15:01:40.709935 4751 reconciler_common.go:293] "Volume detached for volume \"public-tls-certs\" (UniqueName: \"kubernetes.io/secret/255bf0e7-10e4-4d84-8607-14c83ac28044-public-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:40 crc kubenswrapper[4751]: I0131 15:01:40.709943 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/255bf0e7-10e4-4d84-8607-14c83ac28044-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:40 crc kubenswrapper[4751]: I0131 15:01:40.709967 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" " Jan 31 15:01:40 crc kubenswrapper[4751]: I0131 15:01:40.709976 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/255bf0e7-10e4-4d84-8607-14c83ac28044-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:40 crc kubenswrapper[4751]: I0131 15:01:40.709984 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/255bf0e7-10e4-4d84-8607-14c83ac28044-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:40 crc kubenswrapper[4751]: I0131 15:01:40.709998 4751 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-bfmdr\" (UniqueName: \"kubernetes.io/projected/255bf0e7-10e4-4d84-8607-14c83ac28044-kube-api-access-bfmdr\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:40 crc kubenswrapper[4751]: I0131 15:01:40.710014 4751 reconciler_common.go:293] "Volume detached for volume \"internal-tls-certs\" (UniqueName: \"kubernetes.io/secret/255bf0e7-10e4-4d84-8607-14c83ac28044-internal-tls-certs\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:40 crc kubenswrapper[4751]: I0131 15:01:40.725996 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage13-crc" (UniqueName: "kubernetes.io/local-volume/local-storage13-crc") on node "crc" Jan 31 15:01:40 crc kubenswrapper[4751]: I0131 15:01:40.811793 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:41 crc kubenswrapper[4751]: I0131 15:01:41.093646 4751 generic.go:334] "Generic (PLEG): container finished" podID="255bf0e7-10e4-4d84-8607-14c83ac28044" containerID="a08737138511460d108c2e58fd850d74f67395f3f43148d743fdd1308994567c" exitCode=0 Jan 31 15:01:41 crc kubenswrapper[4751]: I0131 15:01:41.093695 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"255bf0e7-10e4-4d84-8607-14c83ac28044","Type":"ContainerDied","Data":"a08737138511460d108c2e58fd850d74f67395f3f43148d743fdd1308994567c"} Jan 31 15:01:41 crc kubenswrapper[4751]: I0131 15:01:41.093713 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:01:41 crc kubenswrapper[4751]: I0131 15:01:41.093737 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"255bf0e7-10e4-4d84-8607-14c83ac28044","Type":"ContainerDied","Data":"f8952d07c6de0f133fd84db9683a5d98d942ed60617f64795edab9f81a8ffdfb"} Jan 31 15:01:41 crc kubenswrapper[4751]: I0131 15:01:41.093760 4751 scope.go:117] "RemoveContainer" containerID="a08737138511460d108c2e58fd850d74f67395f3f43148d743fdd1308994567c" Jan 31 15:01:41 crc kubenswrapper[4751]: I0131 15:01:41.119303 4751 scope.go:117] "RemoveContainer" containerID="ab0fad05949e27dc661aeaec62db4c80b70388679d98c799d711588e4b30d20c" Jan 31 15:01:41 crc kubenswrapper[4751]: I0131 15:01:41.127597 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 15:01:41 crc kubenswrapper[4751]: I0131 15:01:41.133942 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 15:01:41 crc kubenswrapper[4751]: I0131 15:01:41.149817 4751 scope.go:117] "RemoveContainer" containerID="a08737138511460d108c2e58fd850d74f67395f3f43148d743fdd1308994567c" Jan 31 15:01:41 crc kubenswrapper[4751]: E0131 15:01:41.150303 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a08737138511460d108c2e58fd850d74f67395f3f43148d743fdd1308994567c\": container with ID starting with a08737138511460d108c2e58fd850d74f67395f3f43148d743fdd1308994567c not found: ID does not exist" containerID="a08737138511460d108c2e58fd850d74f67395f3f43148d743fdd1308994567c" Jan 31 15:01:41 crc kubenswrapper[4751]: I0131 15:01:41.150336 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a08737138511460d108c2e58fd850d74f67395f3f43148d743fdd1308994567c"} err="failed to get container status 
\"a08737138511460d108c2e58fd850d74f67395f3f43148d743fdd1308994567c\": rpc error: code = NotFound desc = could not find container \"a08737138511460d108c2e58fd850d74f67395f3f43148d743fdd1308994567c\": container with ID starting with a08737138511460d108c2e58fd850d74f67395f3f43148d743fdd1308994567c not found: ID does not exist" Jan 31 15:01:41 crc kubenswrapper[4751]: I0131 15:01:41.150359 4751 scope.go:117] "RemoveContainer" containerID="ab0fad05949e27dc661aeaec62db4c80b70388679d98c799d711588e4b30d20c" Jan 31 15:01:41 crc kubenswrapper[4751]: E0131 15:01:41.150843 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ab0fad05949e27dc661aeaec62db4c80b70388679d98c799d711588e4b30d20c\": container with ID starting with ab0fad05949e27dc661aeaec62db4c80b70388679d98c799d711588e4b30d20c not found: ID does not exist" containerID="ab0fad05949e27dc661aeaec62db4c80b70388679d98c799d711588e4b30d20c" Jan 31 15:01:41 crc kubenswrapper[4751]: I0131 15:01:41.150906 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ab0fad05949e27dc661aeaec62db4c80b70388679d98c799d711588e4b30d20c"} err="failed to get container status \"ab0fad05949e27dc661aeaec62db4c80b70388679d98c799d711588e4b30d20c\": rpc error: code = NotFound desc = could not find container \"ab0fad05949e27dc661aeaec62db4c80b70388679d98c799d711588e4b30d20c\": container with ID starting with ab0fad05949e27dc661aeaec62db4c80b70388679d98c799d711588e4b30d20c not found: ID does not exist" Jan 31 15:01:41 crc kubenswrapper[4751]: I0131 15:01:41.519000 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-create-fm54m"] Jan 31 15:01:41 crc kubenswrapper[4751]: I0131 15:01:41.525177 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-create-fm54m"] Jan 31 15:01:41 crc kubenswrapper[4751]: I0131 15:01:41.535466 4751 kubelet.go:2437] "SyncLoop DELETE" 
source="api" pods=["glance-kuttl-tests/glance-bd3b-account-create-update-nbdd9"] Jan 31 15:01:41 crc kubenswrapper[4751]: I0131 15:01:41.540831 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glancebd3b-account-delete-wcqq6"] Jan 31 15:01:41 crc kubenswrapper[4751]: I0131 15:01:41.545835 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glancebd3b-account-delete-wcqq6"] Jan 31 15:01:41 crc kubenswrapper[4751]: I0131 15:01:41.550965 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-bd3b-account-create-update-nbdd9"] Jan 31 15:01:42 crc kubenswrapper[4751]: I0131 15:01:42.087004 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-create-mcgm2"] Jan 31 15:01:42 crc kubenswrapper[4751]: E0131 15:01:42.087375 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="255bf0e7-10e4-4d84-8607-14c83ac28044" containerName="glance-log" Jan 31 15:01:42 crc kubenswrapper[4751]: I0131 15:01:42.087391 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="255bf0e7-10e4-4d84-8607-14c83ac28044" containerName="glance-log" Jan 31 15:01:42 crc kubenswrapper[4751]: E0131 15:01:42.087416 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="255bf0e7-10e4-4d84-8607-14c83ac28044" containerName="glance-httpd" Jan 31 15:01:42 crc kubenswrapper[4751]: I0131 15:01:42.087424 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="255bf0e7-10e4-4d84-8607-14c83ac28044" containerName="glance-httpd" Jan 31 15:01:42 crc kubenswrapper[4751]: E0131 15:01:42.087456 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b7181ac-f336-4658-bffc-63553f8972d9" containerName="mariadb-account-delete" Jan 31 15:01:42 crc kubenswrapper[4751]: I0131 15:01:42.087468 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b7181ac-f336-4658-bffc-63553f8972d9" containerName="mariadb-account-delete" Jan 31 15:01:42 crc 
kubenswrapper[4751]: I0131 15:01:42.087623 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="255bf0e7-10e4-4d84-8607-14c83ac28044" containerName="glance-httpd" Jan 31 15:01:42 crc kubenswrapper[4751]: I0131 15:01:42.087641 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="255bf0e7-10e4-4d84-8607-14c83ac28044" containerName="glance-log" Jan 31 15:01:42 crc kubenswrapper[4751]: I0131 15:01:42.087660 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b7181ac-f336-4658-bffc-63553f8972d9" containerName="mariadb-account-delete" Jan 31 15:01:42 crc kubenswrapper[4751]: I0131 15:01:42.088342 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-mcgm2" Jan 31 15:01:42 crc kubenswrapper[4751]: I0131 15:01:42.093957 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-a977-account-create-update-tlstz"] Jan 31 15:01:42 crc kubenswrapper[4751]: I0131 15:01:42.095051 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-a977-account-create-update-tlstz" Jan 31 15:01:42 crc kubenswrapper[4751]: I0131 15:01:42.099971 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-db-secret" Jan 31 15:01:42 crc kubenswrapper[4751]: I0131 15:01:42.101716 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-mcgm2"] Jan 31 15:01:42 crc kubenswrapper[4751]: I0131 15:01:42.111564 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-a977-account-create-update-tlstz"] Jan 31 15:01:42 crc kubenswrapper[4751]: I0131 15:01:42.132044 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xtkk5\" (UniqueName: \"kubernetes.io/projected/e9730563-64d8-44a2-9d93-7fe5fcd4c8d4-kube-api-access-xtkk5\") pod \"glance-a977-account-create-update-tlstz\" (UID: \"e9730563-64d8-44a2-9d93-7fe5fcd4c8d4\") " pod="glance-kuttl-tests/glance-a977-account-create-update-tlstz" Jan 31 15:01:42 crc kubenswrapper[4751]: I0131 15:01:42.132135 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9730563-64d8-44a2-9d93-7fe5fcd4c8d4-operator-scripts\") pod \"glance-a977-account-create-update-tlstz\" (UID: \"e9730563-64d8-44a2-9d93-7fe5fcd4c8d4\") " pod="glance-kuttl-tests/glance-a977-account-create-update-tlstz" Jan 31 15:01:42 crc kubenswrapper[4751]: I0131 15:01:42.132186 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mlch\" (UniqueName: \"kubernetes.io/projected/d9e826f0-62a4-4a7c-8945-0c29cd34e667-kube-api-access-4mlch\") pod \"glance-db-create-mcgm2\" (UID: \"d9e826f0-62a4-4a7c-8945-0c29cd34e667\") " pod="glance-kuttl-tests/glance-db-create-mcgm2" Jan 31 15:01:42 crc kubenswrapper[4751]: I0131 15:01:42.132211 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9e826f0-62a4-4a7c-8945-0c29cd34e667-operator-scripts\") pod \"glance-db-create-mcgm2\" (UID: \"d9e826f0-62a4-4a7c-8945-0c29cd34e667\") " pod="glance-kuttl-tests/glance-db-create-mcgm2" Jan 31 15:01:42 crc kubenswrapper[4751]: I0131 15:01:42.233562 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xtkk5\" (UniqueName: \"kubernetes.io/projected/e9730563-64d8-44a2-9d93-7fe5fcd4c8d4-kube-api-access-xtkk5\") pod \"glance-a977-account-create-update-tlstz\" (UID: \"e9730563-64d8-44a2-9d93-7fe5fcd4c8d4\") " pod="glance-kuttl-tests/glance-a977-account-create-update-tlstz" Jan 31 15:01:42 crc kubenswrapper[4751]: I0131 15:01:42.233678 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9730563-64d8-44a2-9d93-7fe5fcd4c8d4-operator-scripts\") pod \"glance-a977-account-create-update-tlstz\" (UID: \"e9730563-64d8-44a2-9d93-7fe5fcd4c8d4\") " pod="glance-kuttl-tests/glance-a977-account-create-update-tlstz" Jan 31 15:01:42 crc kubenswrapper[4751]: I0131 15:01:42.234520 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9730563-64d8-44a2-9d93-7fe5fcd4c8d4-operator-scripts\") pod \"glance-a977-account-create-update-tlstz\" (UID: \"e9730563-64d8-44a2-9d93-7fe5fcd4c8d4\") " pod="glance-kuttl-tests/glance-a977-account-create-update-tlstz" Jan 31 15:01:42 crc kubenswrapper[4751]: I0131 15:01:42.234601 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4mlch\" (UniqueName: \"kubernetes.io/projected/d9e826f0-62a4-4a7c-8945-0c29cd34e667-kube-api-access-4mlch\") pod \"glance-db-create-mcgm2\" (UID: \"d9e826f0-62a4-4a7c-8945-0c29cd34e667\") " 
pod="glance-kuttl-tests/glance-db-create-mcgm2" Jan 31 15:01:42 crc kubenswrapper[4751]: I0131 15:01:42.234632 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9e826f0-62a4-4a7c-8945-0c29cd34e667-operator-scripts\") pod \"glance-db-create-mcgm2\" (UID: \"d9e826f0-62a4-4a7c-8945-0c29cd34e667\") " pod="glance-kuttl-tests/glance-db-create-mcgm2" Jan 31 15:01:42 crc kubenswrapper[4751]: I0131 15:01:42.235591 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9e826f0-62a4-4a7c-8945-0c29cd34e667-operator-scripts\") pod \"glance-db-create-mcgm2\" (UID: \"d9e826f0-62a4-4a7c-8945-0c29cd34e667\") " pod="glance-kuttl-tests/glance-db-create-mcgm2" Jan 31 15:01:42 crc kubenswrapper[4751]: I0131 15:01:42.250454 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mlch\" (UniqueName: \"kubernetes.io/projected/d9e826f0-62a4-4a7c-8945-0c29cd34e667-kube-api-access-4mlch\") pod \"glance-db-create-mcgm2\" (UID: \"d9e826f0-62a4-4a7c-8945-0c29cd34e667\") " pod="glance-kuttl-tests/glance-db-create-mcgm2" Jan 31 15:01:42 crc kubenswrapper[4751]: I0131 15:01:42.252014 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xtkk5\" (UniqueName: \"kubernetes.io/projected/e9730563-64d8-44a2-9d93-7fe5fcd4c8d4-kube-api-access-xtkk5\") pod \"glance-a977-account-create-update-tlstz\" (UID: \"e9730563-64d8-44a2-9d93-7fe5fcd4c8d4\") " pod="glance-kuttl-tests/glance-a977-account-create-update-tlstz" Jan 31 15:01:42 crc kubenswrapper[4751]: I0131 15:01:42.408485 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-mcgm2" Jan 31 15:01:42 crc kubenswrapper[4751]: I0131 15:01:42.413519 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-a977-account-create-update-tlstz" Jan 31 15:01:42 crc kubenswrapper[4751]: I0131 15:01:42.415111 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="255bf0e7-10e4-4d84-8607-14c83ac28044" path="/var/lib/kubelet/pods/255bf0e7-10e4-4d84-8607-14c83ac28044/volumes" Jan 31 15:01:42 crc kubenswrapper[4751]: I0131 15:01:42.415850 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31decdae-8d23-4756-b743-4cd4f7709654" path="/var/lib/kubelet/pods/31decdae-8d23-4756-b743-4cd4f7709654/volumes" Jan 31 15:01:42 crc kubenswrapper[4751]: I0131 15:01:42.416395 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b7181ac-f336-4658-bffc-63553f8972d9" path="/var/lib/kubelet/pods/3b7181ac-f336-4658-bffc-63553f8972d9/volumes" Jan 31 15:01:42 crc kubenswrapper[4751]: I0131 15:01:42.417350 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b491fa19-1dde-4e28-919f-f120c0c772b7" path="/var/lib/kubelet/pods/b491fa19-1dde-4e28-919f-f120c0c772b7/volumes" Jan 31 15:01:42 crc kubenswrapper[4751]: I0131 15:01:42.818403 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-mcgm2"] Jan 31 15:01:42 crc kubenswrapper[4751]: I0131 15:01:42.888867 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-a977-account-create-update-tlstz"] Jan 31 15:01:43 crc kubenswrapper[4751]: I0131 15:01:43.120117 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-mcgm2" event={"ID":"d9e826f0-62a4-4a7c-8945-0c29cd34e667","Type":"ContainerStarted","Data":"488f4cd159917294625dbe3f504270e4c6cae704ed670c29ddb28b43bab332ff"} Jan 31 15:01:43 crc kubenswrapper[4751]: I0131 15:01:43.120469 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-mcgm2" 
event={"ID":"d9e826f0-62a4-4a7c-8945-0c29cd34e667","Type":"ContainerStarted","Data":"bac6aaf151aa682a72b79c98606148bc73e67cc5fae8b0736586855de1506b67"} Jan 31 15:01:43 crc kubenswrapper[4751]: I0131 15:01:43.123318 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-a977-account-create-update-tlstz" event={"ID":"e9730563-64d8-44a2-9d93-7fe5fcd4c8d4","Type":"ContainerStarted","Data":"9903c977627bd13e9ad2f5f25c1001bf58623795a6fa400f5ca5b3724b524577"} Jan 31 15:01:43 crc kubenswrapper[4751]: I0131 15:01:43.123362 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-a977-account-create-update-tlstz" event={"ID":"e9730563-64d8-44a2-9d93-7fe5fcd4c8d4","Type":"ContainerStarted","Data":"74dd271892eb653086ff0009d74ed2a422288b125c2bf6d7587b2c354b96d3a2"} Jan 31 15:01:43 crc kubenswrapper[4751]: I0131 15:01:43.139946 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-db-create-mcgm2" podStartSLOduration=1.139924001 podStartE2EDuration="1.139924001s" podCreationTimestamp="2026-01-31 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:43.132402363 +0000 UTC m=+1207.507115248" watchObservedRunningTime="2026-01-31 15:01:43.139924001 +0000 UTC m=+1207.514636886" Jan 31 15:01:43 crc kubenswrapper[4751]: I0131 15:01:43.154085 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-a977-account-create-update-tlstz" podStartSLOduration=1.1540374230000001 podStartE2EDuration="1.154037423s" podCreationTimestamp="2026-01-31 15:01:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:43.153348235 +0000 UTC m=+1207.528061130" watchObservedRunningTime="2026-01-31 15:01:43.154037423 +0000 UTC m=+1207.528750308" Jan 31 
15:01:44 crc kubenswrapper[4751]: I0131 15:01:44.130327 4751 generic.go:334] "Generic (PLEG): container finished" podID="d9e826f0-62a4-4a7c-8945-0c29cd34e667" containerID="488f4cd159917294625dbe3f504270e4c6cae704ed670c29ddb28b43bab332ff" exitCode=0 Jan 31 15:01:44 crc kubenswrapper[4751]: I0131 15:01:44.130422 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-mcgm2" event={"ID":"d9e826f0-62a4-4a7c-8945-0c29cd34e667","Type":"ContainerDied","Data":"488f4cd159917294625dbe3f504270e4c6cae704ed670c29ddb28b43bab332ff"} Jan 31 15:01:44 crc kubenswrapper[4751]: I0131 15:01:44.132431 4751 generic.go:334] "Generic (PLEG): container finished" podID="e9730563-64d8-44a2-9d93-7fe5fcd4c8d4" containerID="9903c977627bd13e9ad2f5f25c1001bf58623795a6fa400f5ca5b3724b524577" exitCode=0 Jan 31 15:01:44 crc kubenswrapper[4751]: I0131 15:01:44.132469 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-a977-account-create-update-tlstz" event={"ID":"e9730563-64d8-44a2-9d93-7fe5fcd4c8d4","Type":"ContainerDied","Data":"9903c977627bd13e9ad2f5f25c1001bf58623795a6fa400f5ca5b3724b524577"} Jan 31 15:01:45 crc kubenswrapper[4751]: I0131 15:01:45.536179 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-a977-account-create-update-tlstz" Jan 31 15:01:45 crc kubenswrapper[4751]: I0131 15:01:45.544918 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-create-mcgm2" Jan 31 15:01:45 crc kubenswrapper[4751]: I0131 15:01:45.590627 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9e826f0-62a4-4a7c-8945-0c29cd34e667-operator-scripts\") pod \"d9e826f0-62a4-4a7c-8945-0c29cd34e667\" (UID: \"d9e826f0-62a4-4a7c-8945-0c29cd34e667\") " Jan 31 15:01:45 crc kubenswrapper[4751]: I0131 15:01:45.590660 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9730563-64d8-44a2-9d93-7fe5fcd4c8d4-operator-scripts\") pod \"e9730563-64d8-44a2-9d93-7fe5fcd4c8d4\" (UID: \"e9730563-64d8-44a2-9d93-7fe5fcd4c8d4\") " Jan 31 15:01:45 crc kubenswrapper[4751]: I0131 15:01:45.590688 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xtkk5\" (UniqueName: \"kubernetes.io/projected/e9730563-64d8-44a2-9d93-7fe5fcd4c8d4-kube-api-access-xtkk5\") pod \"e9730563-64d8-44a2-9d93-7fe5fcd4c8d4\" (UID: \"e9730563-64d8-44a2-9d93-7fe5fcd4c8d4\") " Jan 31 15:01:45 crc kubenswrapper[4751]: I0131 15:01:45.590718 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4mlch\" (UniqueName: \"kubernetes.io/projected/d9e826f0-62a4-4a7c-8945-0c29cd34e667-kube-api-access-4mlch\") pod \"d9e826f0-62a4-4a7c-8945-0c29cd34e667\" (UID: \"d9e826f0-62a4-4a7c-8945-0c29cd34e667\") " Jan 31 15:01:45 crc kubenswrapper[4751]: I0131 15:01:45.591349 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9730563-64d8-44a2-9d93-7fe5fcd4c8d4-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "e9730563-64d8-44a2-9d93-7fe5fcd4c8d4" (UID: "e9730563-64d8-44a2-9d93-7fe5fcd4c8d4"). InnerVolumeSpecName "operator-scripts". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:01:45 crc kubenswrapper[4751]: I0131 15:01:45.591490 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d9e826f0-62a4-4a7c-8945-0c29cd34e667-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "d9e826f0-62a4-4a7c-8945-0c29cd34e667" (UID: "d9e826f0-62a4-4a7c-8945-0c29cd34e667"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:01:45 crc kubenswrapper[4751]: I0131 15:01:45.595742 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9730563-64d8-44a2-9d93-7fe5fcd4c8d4-kube-api-access-xtkk5" (OuterVolumeSpecName: "kube-api-access-xtkk5") pod "e9730563-64d8-44a2-9d93-7fe5fcd4c8d4" (UID: "e9730563-64d8-44a2-9d93-7fe5fcd4c8d4"). InnerVolumeSpecName "kube-api-access-xtkk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:01:45 crc kubenswrapper[4751]: I0131 15:01:45.595821 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9e826f0-62a4-4a7c-8945-0c29cd34e667-kube-api-access-4mlch" (OuterVolumeSpecName: "kube-api-access-4mlch") pod "d9e826f0-62a4-4a7c-8945-0c29cd34e667" (UID: "d9e826f0-62a4-4a7c-8945-0c29cd34e667"). InnerVolumeSpecName "kube-api-access-4mlch". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:01:45 crc kubenswrapper[4751]: I0131 15:01:45.691949 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/d9e826f0-62a4-4a7c-8945-0c29cd34e667-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:45 crc kubenswrapper[4751]: I0131 15:01:45.691975 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/e9730563-64d8-44a2-9d93-7fe5fcd4c8d4-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:45 crc kubenswrapper[4751]: I0131 15:01:45.691984 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xtkk5\" (UniqueName: \"kubernetes.io/projected/e9730563-64d8-44a2-9d93-7fe5fcd4c8d4-kube-api-access-xtkk5\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:45 crc kubenswrapper[4751]: I0131 15:01:45.691995 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4mlch\" (UniqueName: \"kubernetes.io/projected/d9e826f0-62a4-4a7c-8945-0c29cd34e667-kube-api-access-4mlch\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:46 crc kubenswrapper[4751]: I0131 15:01:46.168716 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-a977-account-create-update-tlstz" event={"ID":"e9730563-64d8-44a2-9d93-7fe5fcd4c8d4","Type":"ContainerDied","Data":"74dd271892eb653086ff0009d74ed2a422288b125c2bf6d7587b2c354b96d3a2"} Jan 31 15:01:46 crc kubenswrapper[4751]: I0131 15:01:46.168775 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="74dd271892eb653086ff0009d74ed2a422288b125c2bf6d7587b2c354b96d3a2" Jan 31 15:01:46 crc kubenswrapper[4751]: I0131 15:01:46.168888 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-a977-account-create-update-tlstz" Jan 31 15:01:46 crc kubenswrapper[4751]: I0131 15:01:46.171557 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-mcgm2" event={"ID":"d9e826f0-62a4-4a7c-8945-0c29cd34e667","Type":"ContainerDied","Data":"bac6aaf151aa682a72b79c98606148bc73e67cc5fae8b0736586855de1506b67"} Jan 31 15:01:46 crc kubenswrapper[4751]: I0131 15:01:46.171679 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bac6aaf151aa682a72b79c98606148bc73e67cc5fae8b0736586855de1506b67" Jan 31 15:01:46 crc kubenswrapper[4751]: I0131 15:01:46.171789 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-mcgm2" Jan 31 15:01:47 crc kubenswrapper[4751]: I0131 15:01:47.229756 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-sync-mxvm7"] Jan 31 15:01:47 crc kubenswrapper[4751]: E0131 15:01:47.230371 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e9730563-64d8-44a2-9d93-7fe5fcd4c8d4" containerName="mariadb-account-create-update" Jan 31 15:01:47 crc kubenswrapper[4751]: I0131 15:01:47.230389 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9730563-64d8-44a2-9d93-7fe5fcd4c8d4" containerName="mariadb-account-create-update" Jan 31 15:01:47 crc kubenswrapper[4751]: E0131 15:01:47.230413 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d9e826f0-62a4-4a7c-8945-0c29cd34e667" containerName="mariadb-database-create" Jan 31 15:01:47 crc kubenswrapper[4751]: I0131 15:01:47.230420 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9e826f0-62a4-4a7c-8945-0c29cd34e667" containerName="mariadb-database-create" Jan 31 15:01:47 crc kubenswrapper[4751]: I0131 15:01:47.230551 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e9730563-64d8-44a2-9d93-7fe5fcd4c8d4" 
containerName="mariadb-account-create-update" Jan 31 15:01:47 crc kubenswrapper[4751]: I0131 15:01:47.230571 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="d9e826f0-62a4-4a7c-8945-0c29cd34e667" containerName="mariadb-database-create" Jan 31 15:01:47 crc kubenswrapper[4751]: I0131 15:01:47.231022 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-mxvm7" Jan 31 15:01:47 crc kubenswrapper[4751]: I0131 15:01:47.234730 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-config-data" Jan 31 15:01:47 crc kubenswrapper[4751]: I0131 15:01:47.239003 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-6mvx9" Jan 31 15:01:47 crc kubenswrapper[4751]: I0131 15:01:47.245707 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-mxvm7"] Jan 31 15:01:47 crc kubenswrapper[4751]: I0131 15:01:47.314832 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbf741e4-9445-4080-84f2-601e270f7aa0-config-data\") pod \"glance-db-sync-mxvm7\" (UID: \"dbf741e4-9445-4080-84f2-601e270f7aa0\") " pod="glance-kuttl-tests/glance-db-sync-mxvm7" Jan 31 15:01:47 crc kubenswrapper[4751]: I0131 15:01:47.314906 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dbf741e4-9445-4080-84f2-601e270f7aa0-db-sync-config-data\") pod \"glance-db-sync-mxvm7\" (UID: \"dbf741e4-9445-4080-84f2-601e270f7aa0\") " pod="glance-kuttl-tests/glance-db-sync-mxvm7" Jan 31 15:01:47 crc kubenswrapper[4751]: I0131 15:01:47.315019 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hczzd\" (UniqueName: 
\"kubernetes.io/projected/dbf741e4-9445-4080-84f2-601e270f7aa0-kube-api-access-hczzd\") pod \"glance-db-sync-mxvm7\" (UID: \"dbf741e4-9445-4080-84f2-601e270f7aa0\") " pod="glance-kuttl-tests/glance-db-sync-mxvm7" Jan 31 15:01:47 crc kubenswrapper[4751]: I0131 15:01:47.415855 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dbf741e4-9445-4080-84f2-601e270f7aa0-db-sync-config-data\") pod \"glance-db-sync-mxvm7\" (UID: \"dbf741e4-9445-4080-84f2-601e270f7aa0\") " pod="glance-kuttl-tests/glance-db-sync-mxvm7" Jan 31 15:01:47 crc kubenswrapper[4751]: I0131 15:01:47.415914 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hczzd\" (UniqueName: \"kubernetes.io/projected/dbf741e4-9445-4080-84f2-601e270f7aa0-kube-api-access-hczzd\") pod \"glance-db-sync-mxvm7\" (UID: \"dbf741e4-9445-4080-84f2-601e270f7aa0\") " pod="glance-kuttl-tests/glance-db-sync-mxvm7" Jan 31 15:01:47 crc kubenswrapper[4751]: I0131 15:01:47.415981 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbf741e4-9445-4080-84f2-601e270f7aa0-config-data\") pod \"glance-db-sync-mxvm7\" (UID: \"dbf741e4-9445-4080-84f2-601e270f7aa0\") " pod="glance-kuttl-tests/glance-db-sync-mxvm7" Jan 31 15:01:47 crc kubenswrapper[4751]: I0131 15:01:47.420162 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbf741e4-9445-4080-84f2-601e270f7aa0-config-data\") pod \"glance-db-sync-mxvm7\" (UID: \"dbf741e4-9445-4080-84f2-601e270f7aa0\") " pod="glance-kuttl-tests/glance-db-sync-mxvm7" Jan 31 15:01:47 crc kubenswrapper[4751]: I0131 15:01:47.420615 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dbf741e4-9445-4080-84f2-601e270f7aa0-db-sync-config-data\") pod 
\"glance-db-sync-mxvm7\" (UID: \"dbf741e4-9445-4080-84f2-601e270f7aa0\") " pod="glance-kuttl-tests/glance-db-sync-mxvm7" Jan 31 15:01:47 crc kubenswrapper[4751]: I0131 15:01:47.432459 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hczzd\" (UniqueName: \"kubernetes.io/projected/dbf741e4-9445-4080-84f2-601e270f7aa0-kube-api-access-hczzd\") pod \"glance-db-sync-mxvm7\" (UID: \"dbf741e4-9445-4080-84f2-601e270f7aa0\") " pod="glance-kuttl-tests/glance-db-sync-mxvm7" Jan 31 15:01:47 crc kubenswrapper[4751]: I0131 15:01:47.547568 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-mxvm7" Jan 31 15:01:47 crc kubenswrapper[4751]: I0131 15:01:47.963498 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-mxvm7"] Jan 31 15:01:48 crc kubenswrapper[4751]: I0131 15:01:48.191830 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-mxvm7" event={"ID":"dbf741e4-9445-4080-84f2-601e270f7aa0","Type":"ContainerStarted","Data":"85b52c03617c4a88d648ff21fe628b61e341e359552690aead8461966d078b23"} Jan 31 15:01:49 crc kubenswrapper[4751]: I0131 15:01:49.200422 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-mxvm7" event={"ID":"dbf741e4-9445-4080-84f2-601e270f7aa0","Type":"ContainerStarted","Data":"f75375e8e6ad82f0f02e30825660a61882c0595e19792c2979a8125e9bf94686"} Jan 31 15:01:49 crc kubenswrapper[4751]: I0131 15:01:49.217541 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-db-sync-mxvm7" podStartSLOduration=2.217520098 podStartE2EDuration="2.217520098s" podCreationTimestamp="2026-01-31 15:01:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:49.215997558 +0000 UTC m=+1213.590710443" 
watchObservedRunningTime="2026-01-31 15:01:49.217520098 +0000 UTC m=+1213.592232983" Jan 31 15:01:52 crc kubenswrapper[4751]: I0131 15:01:52.228792 4751 generic.go:334] "Generic (PLEG): container finished" podID="dbf741e4-9445-4080-84f2-601e270f7aa0" containerID="f75375e8e6ad82f0f02e30825660a61882c0595e19792c2979a8125e9bf94686" exitCode=0 Jan 31 15:01:52 crc kubenswrapper[4751]: I0131 15:01:52.228863 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-mxvm7" event={"ID":"dbf741e4-9445-4080-84f2-601e270f7aa0","Type":"ContainerDied","Data":"f75375e8e6ad82f0f02e30825660a61882c0595e19792c2979a8125e9bf94686"} Jan 31 15:01:53 crc kubenswrapper[4751]: I0131 15:01:53.551470 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-mxvm7" Jan 31 15:01:53 crc kubenswrapper[4751]: I0131 15:01:53.630930 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hczzd\" (UniqueName: \"kubernetes.io/projected/dbf741e4-9445-4080-84f2-601e270f7aa0-kube-api-access-hczzd\") pod \"dbf741e4-9445-4080-84f2-601e270f7aa0\" (UID: \"dbf741e4-9445-4080-84f2-601e270f7aa0\") " Jan 31 15:01:53 crc kubenswrapper[4751]: I0131 15:01:53.630988 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dbf741e4-9445-4080-84f2-601e270f7aa0-db-sync-config-data\") pod \"dbf741e4-9445-4080-84f2-601e270f7aa0\" (UID: \"dbf741e4-9445-4080-84f2-601e270f7aa0\") " Jan 31 15:01:53 crc kubenswrapper[4751]: I0131 15:01:53.631017 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbf741e4-9445-4080-84f2-601e270f7aa0-config-data\") pod \"dbf741e4-9445-4080-84f2-601e270f7aa0\" (UID: \"dbf741e4-9445-4080-84f2-601e270f7aa0\") " Jan 31 15:01:53 crc kubenswrapper[4751]: I0131 15:01:53.636014 4751 
operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbf741e4-9445-4080-84f2-601e270f7aa0-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "dbf741e4-9445-4080-84f2-601e270f7aa0" (UID: "dbf741e4-9445-4080-84f2-601e270f7aa0"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:01:53 crc kubenswrapper[4751]: I0131 15:01:53.636461 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dbf741e4-9445-4080-84f2-601e270f7aa0-kube-api-access-hczzd" (OuterVolumeSpecName: "kube-api-access-hczzd") pod "dbf741e4-9445-4080-84f2-601e270f7aa0" (UID: "dbf741e4-9445-4080-84f2-601e270f7aa0"). InnerVolumeSpecName "kube-api-access-hczzd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:01:53 crc kubenswrapper[4751]: I0131 15:01:53.664428 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dbf741e4-9445-4080-84f2-601e270f7aa0-config-data" (OuterVolumeSpecName: "config-data") pod "dbf741e4-9445-4080-84f2-601e270f7aa0" (UID: "dbf741e4-9445-4080-84f2-601e270f7aa0"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:01:53 crc kubenswrapper[4751]: I0131 15:01:53.732147 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hczzd\" (UniqueName: \"kubernetes.io/projected/dbf741e4-9445-4080-84f2-601e270f7aa0-kube-api-access-hczzd\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:53 crc kubenswrapper[4751]: I0131 15:01:53.732193 4751 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/dbf741e4-9445-4080-84f2-601e270f7aa0-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:53 crc kubenswrapper[4751]: I0131 15:01:53.732207 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dbf741e4-9445-4080-84f2-601e270f7aa0-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:54 crc kubenswrapper[4751]: I0131 15:01:54.264437 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-mxvm7" event={"ID":"dbf741e4-9445-4080-84f2-601e270f7aa0","Type":"ContainerDied","Data":"85b52c03617c4a88d648ff21fe628b61e341e359552690aead8461966d078b23"} Jan 31 15:01:54 crc kubenswrapper[4751]: I0131 15:01:54.264484 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85b52c03617c4a88d648ff21fe628b61e341e359552690aead8461966d078b23" Jan 31 15:01:54 crc kubenswrapper[4751]: I0131 15:01:54.264504 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-mxvm7" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.462664 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 15:01:55 crc kubenswrapper[4751]: E0131 15:01:55.463143 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dbf741e4-9445-4080-84f2-601e270f7aa0" containerName="glance-db-sync" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.463155 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="dbf741e4-9445-4080-84f2-601e270f7aa0" containerName="glance-db-sync" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.463295 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="dbf741e4-9445-4080-84f2-601e270f7aa0" containerName="glance-db-sync" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.464144 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.467671 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-external-config-data" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.468321 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-scripts" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.472015 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-6mvx9" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.485017 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.571293 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") 
pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.571584 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-scripts\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.571696 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-config-data\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.571792 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.571898 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w868q\" (UniqueName: \"kubernetes.io/projected/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-kube-api-access-w868q\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.572020 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.572143 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.572239 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.572351 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-logs\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.572436 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-run\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.572552 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.572650 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.572734 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-sys\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.572838 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-dev\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.621787 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.623095 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.626547 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-internal-config-data" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.647399 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.678738 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.678784 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-scripts\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.678806 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-config-data\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.678831 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 
crc kubenswrapper[4751]: I0131 15:01:55.678853 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w868q\" (UniqueName: \"kubernetes.io/projected/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-kube-api-access-w868q\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.678881 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.678903 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.678924 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.678941 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-logs\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 
15:01:55.678958 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-run\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.678972 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.678980 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.679039 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-sys\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.679062 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.679120 4751 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-dev\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.679228 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") device mount path \"/mnt/openstack/pv20\"" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.679302 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-sys\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.679281 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-dev\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.679456 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.679494 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.679594 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.679641 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.679647 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-run\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.679649 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") device mount path \"/mnt/openstack/pv17\"" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.680081 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-logs\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.684488 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-config-data\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.687089 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-scripts\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.702355 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w868q\" (UniqueName: \"kubernetes.io/projected/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-kube-api-access-w868q\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.708360 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.718912 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod 
\"glance-default-external-api-0\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.780270 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-run\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.780625 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-54nck\" (UniqueName: \"kubernetes.io/projected/221322d6-160f-48ee-bed1-a02ac6cbfb09-kube-api-access-54nck\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.780434 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.780659 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/221322d6-160f-48ee-bed1-a02ac6cbfb09-logs\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.780854 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-sys\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.780898 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.780959 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.781299 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.781423 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-dev\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.781486 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.781569 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/221322d6-160f-48ee-bed1-a02ac6cbfb09-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.781703 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.781777 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-lib-modules\") pod 
\"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.781848 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/221322d6-160f-48ee-bed1-a02ac6cbfb09-config-data\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.781918 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/221322d6-160f-48ee-bed1-a02ac6cbfb09-scripts\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.883136 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-54nck\" (UniqueName: \"kubernetes.io/projected/221322d6-160f-48ee-bed1-a02ac6cbfb09-kube-api-access-54nck\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.883196 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/221322d6-160f-48ee-bed1-a02ac6cbfb09-logs\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.883238 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-sys\") pod 
\"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.883260 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.883296 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.883339 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.883362 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.883396 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-dev\") pod \"glance-default-internal-api-0\" (UID: 
\"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.883424 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/221322d6-160f-48ee-bed1-a02ac6cbfb09-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.883485 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-dev\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.883509 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.883495 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.883568 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-sys\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " 
pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.883653 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") device mount path \"/mnt/openstack/pv05\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.883663 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.883689 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.883713 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/221322d6-160f-48ee-bed1-a02ac6cbfb09-config-data\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.883736 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/221322d6-160f-48ee-bed1-a02ac6cbfb09-scripts\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " 
pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.883740 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") device mount path \"/mnt/openstack/pv02\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.883776 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-run\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.883864 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-run\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.883911 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.883959 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 
15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.884089 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/221322d6-160f-48ee-bed1-a02ac6cbfb09-logs\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.884123 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/221322d6-160f-48ee-bed1-a02ac6cbfb09-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.888809 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/221322d6-160f-48ee-bed1-a02ac6cbfb09-scripts\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.890021 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/221322d6-160f-48ee-bed1-a02ac6cbfb09-config-data\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.902603 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.902840 4751 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.916104 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-54nck\" (UniqueName: \"kubernetes.io/projected/221322d6-160f-48ee-bed1-a02ac6cbfb09-kube-api-access-54nck\") pod \"glance-default-internal-api-0\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:55 crc kubenswrapper[4751]: I0131 15:01:55.935422 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:56 crc kubenswrapper[4751]: I0131 15:01:56.196352 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 15:01:56 crc kubenswrapper[4751]: I0131 15:01:56.280621 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"5c99f5b1-8566-4141-9bd4-71a75e7f43b6","Type":"ContainerStarted","Data":"632343238b8b6273cfe0d462a1823f7261ef1f48b55a453cfe7a4028e8a3bc11"} Jan 31 15:01:56 crc kubenswrapper[4751]: I0131 15:01:56.340177 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 15:01:56 crc kubenswrapper[4751]: I0131 15:01:56.733806 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 15:01:57 crc kubenswrapper[4751]: I0131 15:01:57.291201 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" 
event={"ID":"5c99f5b1-8566-4141-9bd4-71a75e7f43b6","Type":"ContainerStarted","Data":"b116da01d3251d46a5037ade80e62e2ba05b68577c23f5fe7ea7d5c1c6939e12"} Jan 31 15:01:57 crc kubenswrapper[4751]: I0131 15:01:57.291859 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"5c99f5b1-8566-4141-9bd4-71a75e7f43b6","Type":"ContainerStarted","Data":"88dad209ae8e8d063a908076aa5adea9728c00caf34294e807966d88047f26f8"} Jan 31 15:01:57 crc kubenswrapper[4751]: I0131 15:01:57.292889 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"221322d6-160f-48ee-bed1-a02ac6cbfb09","Type":"ContainerStarted","Data":"2d17a7d49f7479975731597d7e17ac81d17ead2b622a0ef7093e781d499f7009"} Jan 31 15:01:57 crc kubenswrapper[4751]: I0131 15:01:57.292937 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"221322d6-160f-48ee-bed1-a02ac6cbfb09","Type":"ContainerStarted","Data":"2145d899923840c33715fa17628307294c5047421a38139dabc06cf3d05cb997"} Jan 31 15:01:57 crc kubenswrapper[4751]: I0131 15:01:57.292951 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"221322d6-160f-48ee-bed1-a02ac6cbfb09","Type":"ContainerStarted","Data":"31e67ae7532286bf7c53890d945174ea89dd8a711c528d0711e4c1c63616c6e2"} Jan 31 15:01:58 crc kubenswrapper[4751]: I0131 15:01:58.300809 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"5c99f5b1-8566-4141-9bd4-71a75e7f43b6","Type":"ContainerStarted","Data":"979026a33bcf12c3258686b10538e86fd757dcafb8921f3a74303169ea20efee"} Jan 31 15:01:58 crc kubenswrapper[4751]: I0131 15:01:58.303535 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" 
event={"ID":"221322d6-160f-48ee-bed1-a02ac6cbfb09","Type":"ContainerStarted","Data":"a97088bc226d5155802489f7ac6a208ee9b1cacfbbb954588201d395f2a07500"} Jan 31 15:01:58 crc kubenswrapper[4751]: I0131 15:01:58.303697 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="221322d6-160f-48ee-bed1-a02ac6cbfb09" containerName="glance-log" containerID="cri-o://2145d899923840c33715fa17628307294c5047421a38139dabc06cf3d05cb997" gracePeriod=30 Jan 31 15:01:58 crc kubenswrapper[4751]: I0131 15:01:58.303823 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="221322d6-160f-48ee-bed1-a02ac6cbfb09" containerName="glance-httpd" containerID="cri-o://2d17a7d49f7479975731597d7e17ac81d17ead2b622a0ef7093e781d499f7009" gracePeriod=30 Jan 31 15:01:58 crc kubenswrapper[4751]: I0131 15:01:58.303836 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="221322d6-160f-48ee-bed1-a02ac6cbfb09" containerName="glance-api" containerID="cri-o://a97088bc226d5155802489f7ac6a208ee9b1cacfbbb954588201d395f2a07500" gracePeriod=30 Jan 31 15:01:58 crc kubenswrapper[4751]: I0131 15:01:58.333026 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-external-api-0" podStartSLOduration=3.333010475 podStartE2EDuration="3.333010475s" podCreationTimestamp="2026-01-31 15:01:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:58.331671209 +0000 UTC m=+1222.706384094" watchObservedRunningTime="2026-01-31 15:01:58.333010475 +0000 UTC m=+1222.707723360" Jan 31 15:01:58 crc kubenswrapper[4751]: I0131 15:01:58.363515 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="glance-kuttl-tests/glance-default-internal-api-0" podStartSLOduration=4.363494718 podStartE2EDuration="4.363494718s" podCreationTimestamp="2026-01-31 15:01:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:01:58.360409527 +0000 UTC m=+1222.735122432" watchObservedRunningTime="2026-01-31 15:01:58.363494718 +0000 UTC m=+1222.738207603" Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.313243 4751 generic.go:334] "Generic (PLEG): container finished" podID="221322d6-160f-48ee-bed1-a02ac6cbfb09" containerID="a97088bc226d5155802489f7ac6a208ee9b1cacfbbb954588201d395f2a07500" exitCode=143 Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.313606 4751 generic.go:334] "Generic (PLEG): container finished" podID="221322d6-160f-48ee-bed1-a02ac6cbfb09" containerID="2d17a7d49f7479975731597d7e17ac81d17ead2b622a0ef7093e781d499f7009" exitCode=143 Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.313614 4751 generic.go:334] "Generic (PLEG): container finished" podID="221322d6-160f-48ee-bed1-a02ac6cbfb09" containerID="2145d899923840c33715fa17628307294c5047421a38139dabc06cf3d05cb997" exitCode=143 Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.313306 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"221322d6-160f-48ee-bed1-a02ac6cbfb09","Type":"ContainerDied","Data":"a97088bc226d5155802489f7ac6a208ee9b1cacfbbb954588201d395f2a07500"} Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.313728 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"221322d6-160f-48ee-bed1-a02ac6cbfb09","Type":"ContainerDied","Data":"2d17a7d49f7479975731597d7e17ac81d17ead2b622a0ef7093e781d499f7009"} Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.313745 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"221322d6-160f-48ee-bed1-a02ac6cbfb09","Type":"ContainerDied","Data":"2145d899923840c33715fa17628307294c5047421a38139dabc06cf3d05cb997"} Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.313759 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"221322d6-160f-48ee-bed1-a02ac6cbfb09","Type":"ContainerDied","Data":"31e67ae7532286bf7c53890d945174ea89dd8a711c528d0711e4c1c63616c6e2"} Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.313772 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="31e67ae7532286bf7c53890d945174ea89dd8a711c528d0711e4c1c63616c6e2" Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.345946 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.437385 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-etc-nvme\") pod \"221322d6-160f-48ee-bed1-a02ac6cbfb09\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.437419 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-etc-iscsi\") pod \"221322d6-160f-48ee-bed1-a02ac6cbfb09\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.437449 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/221322d6-160f-48ee-bed1-a02ac6cbfb09-scripts\") pod \"221322d6-160f-48ee-bed1-a02ac6cbfb09\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " Jan 31 15:01:59 crc 
kubenswrapper[4751]: I0131 15:01:59.437504 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-var-locks-brick\") pod \"221322d6-160f-48ee-bed1-a02ac6cbfb09\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.437521 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"221322d6-160f-48ee-bed1-a02ac6cbfb09\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.437538 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "221322d6-160f-48ee-bed1-a02ac6cbfb09" (UID: "221322d6-160f-48ee-bed1-a02ac6cbfb09"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.437576 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "221322d6-160f-48ee-bed1-a02ac6cbfb09" (UID: "221322d6-160f-48ee-bed1-a02ac6cbfb09"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.437538 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "221322d6-160f-48ee-bed1-a02ac6cbfb09" (UID: "221322d6-160f-48ee-bed1-a02ac6cbfb09"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.437560 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-54nck\" (UniqueName: \"kubernetes.io/projected/221322d6-160f-48ee-bed1-a02ac6cbfb09-kube-api-access-54nck\") pod \"221322d6-160f-48ee-bed1-a02ac6cbfb09\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.437635 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"221322d6-160f-48ee-bed1-a02ac6cbfb09\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.437751 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/221322d6-160f-48ee-bed1-a02ac6cbfb09-logs\") pod \"221322d6-160f-48ee-bed1-a02ac6cbfb09\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.437773 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-dev\") pod \"221322d6-160f-48ee-bed1-a02ac6cbfb09\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.437793 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-lib-modules\") pod \"221322d6-160f-48ee-bed1-a02ac6cbfb09\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.437839 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-sys\") 
pod \"221322d6-160f-48ee-bed1-a02ac6cbfb09\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.437858 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/221322d6-160f-48ee-bed1-a02ac6cbfb09-httpd-run\") pod \"221322d6-160f-48ee-bed1-a02ac6cbfb09\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.437883 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/221322d6-160f-48ee-bed1-a02ac6cbfb09-config-data\") pod \"221322d6-160f-48ee-bed1-a02ac6cbfb09\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.437908 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-run\") pod \"221322d6-160f-48ee-bed1-a02ac6cbfb09\" (UID: \"221322d6-160f-48ee-bed1-a02ac6cbfb09\") " Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.438157 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "221322d6-160f-48ee-bed1-a02ac6cbfb09" (UID: "221322d6-160f-48ee-bed1-a02ac6cbfb09"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.438209 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-run" (OuterVolumeSpecName: "run") pod "221322d6-160f-48ee-bed1-a02ac6cbfb09" (UID: "221322d6-160f-48ee-bed1-a02ac6cbfb09"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.438270 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-sys" (OuterVolumeSpecName: "sys") pod "221322d6-160f-48ee-bed1-a02ac6cbfb09" (UID: "221322d6-160f-48ee-bed1-a02ac6cbfb09"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.438493 4751 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.438515 4751 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-sys\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.438527 4751 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.438537 4751 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.438547 4751 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.438558 4751 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-var-locks-brick\") on node \"crc\" DevicePath 
\"\"" Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.438586 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-dev" (OuterVolumeSpecName: "dev") pod "221322d6-160f-48ee-bed1-a02ac6cbfb09" (UID: "221322d6-160f-48ee-bed1-a02ac6cbfb09"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.438848 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/221322d6-160f-48ee-bed1-a02ac6cbfb09-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "221322d6-160f-48ee-bed1-a02ac6cbfb09" (UID: "221322d6-160f-48ee-bed1-a02ac6cbfb09"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.438907 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/221322d6-160f-48ee-bed1-a02ac6cbfb09-logs" (OuterVolumeSpecName: "logs") pod "221322d6-160f-48ee-bed1-a02ac6cbfb09" (UID: "221322d6-160f-48ee-bed1-a02ac6cbfb09"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.444024 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance-cache") pod "221322d6-160f-48ee-bed1-a02ac6cbfb09" (UID: "221322d6-160f-48ee-bed1-a02ac6cbfb09"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.444438 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "221322d6-160f-48ee-bed1-a02ac6cbfb09" (UID: "221322d6-160f-48ee-bed1-a02ac6cbfb09"). 
InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.445061 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/221322d6-160f-48ee-bed1-a02ac6cbfb09-kube-api-access-54nck" (OuterVolumeSpecName: "kube-api-access-54nck") pod "221322d6-160f-48ee-bed1-a02ac6cbfb09" (UID: "221322d6-160f-48ee-bed1-a02ac6cbfb09"). InnerVolumeSpecName "kube-api-access-54nck". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.469301 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/221322d6-160f-48ee-bed1-a02ac6cbfb09-scripts" (OuterVolumeSpecName: "scripts") pod "221322d6-160f-48ee-bed1-a02ac6cbfb09" (UID: "221322d6-160f-48ee-bed1-a02ac6cbfb09"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.518437 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/221322d6-160f-48ee-bed1-a02ac6cbfb09-config-data" (OuterVolumeSpecName: "config-data") pod "221322d6-160f-48ee-bed1-a02ac6cbfb09" (UID: "221322d6-160f-48ee-bed1-a02ac6cbfb09"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.539551 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.539596 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.539614 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-54nck\" (UniqueName: \"kubernetes.io/projected/221322d6-160f-48ee-bed1-a02ac6cbfb09-kube-api-access-54nck\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.539629 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/221322d6-160f-48ee-bed1-a02ac6cbfb09-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.539641 4751 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/221322d6-160f-48ee-bed1-a02ac6cbfb09-dev\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.539653 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/221322d6-160f-48ee-bed1-a02ac6cbfb09-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.539665 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/221322d6-160f-48ee-bed1-a02ac6cbfb09-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.539675 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/221322d6-160f-48ee-bed1-a02ac6cbfb09-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.564427 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.568725 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.641281 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:01:59 crc kubenswrapper[4751]: I0131 15:01:59.641324 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.321012 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.365879 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.374296 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.386349 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 15:02:00 crc kubenswrapper[4751]: E0131 15:02:00.386647 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="221322d6-160f-48ee-bed1-a02ac6cbfb09" containerName="glance-log" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.386670 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="221322d6-160f-48ee-bed1-a02ac6cbfb09" containerName="glance-log" Jan 31 15:02:00 crc kubenswrapper[4751]: E0131 15:02:00.386694 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="221322d6-160f-48ee-bed1-a02ac6cbfb09" containerName="glance-httpd" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.386703 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="221322d6-160f-48ee-bed1-a02ac6cbfb09" containerName="glance-httpd" Jan 31 15:02:00 crc kubenswrapper[4751]: E0131 15:02:00.386710 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="221322d6-160f-48ee-bed1-a02ac6cbfb09" containerName="glance-api" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.386717 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="221322d6-160f-48ee-bed1-a02ac6cbfb09" containerName="glance-api" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.386850 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="221322d6-160f-48ee-bed1-a02ac6cbfb09" containerName="glance-httpd" Jan 31 15:02:00 crc 
kubenswrapper[4751]: I0131 15:02:00.386875 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="221322d6-160f-48ee-bed1-a02ac6cbfb09" containerName="glance-log" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.386892 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="221322d6-160f-48ee-bed1-a02ac6cbfb09" containerName="glance-api" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.387873 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.390477 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-internal-config-data" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.403079 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.417546 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="221322d6-160f-48ee-bed1-a02ac6cbfb09" path="/var/lib/kubelet/pods/221322d6-160f-48ee-bed1-a02ac6cbfb09/volumes" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.453276 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-run\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.453326 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc 
kubenswrapper[4751]: I0131 15:02:00.453350 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.453371 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.453403 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.453441 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0b77b88-19a5-4bdc-87a1-6a65273226a2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.453498 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " 
pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.453552 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.453616 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-dev\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.453648 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l8cqp\" (UniqueName: \"kubernetes.io/projected/f0b77b88-19a5-4bdc-87a1-6a65273226a2-kube-api-access-l8cqp\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.453898 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0b77b88-19a5-4bdc-87a1-6a65273226a2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.453963 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f0b77b88-19a5-4bdc-87a1-6a65273226a2-httpd-run\") pod 
\"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.453996 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0b77b88-19a5-4bdc-87a1-6a65273226a2-logs\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.454019 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-sys\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.554836 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-dev\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.554913 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l8cqp\" (UniqueName: \"kubernetes.io/projected/f0b77b88-19a5-4bdc-87a1-6a65273226a2-kube-api-access-l8cqp\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.554973 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0b77b88-19a5-4bdc-87a1-6a65273226a2-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.554981 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-dev\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.555005 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f0b77b88-19a5-4bdc-87a1-6a65273226a2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.555021 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0b77b88-19a5-4bdc-87a1-6a65273226a2-logs\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.555040 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-sys\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.555098 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " 
pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.555160 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-sys\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.555201 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.555270 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.555216 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-run\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.555307 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc 
kubenswrapper[4751]: I0131 15:02:00.555331 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.555409 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0b77b88-19a5-4bdc-87a1-6a65273226a2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.555430 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.555356 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.555338 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-run\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.555513 4751 operation_generator.go:580] 
"MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") device mount path \"/mnt/openstack/pv05\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.555377 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.555606 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.555633 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") device mount path \"/mnt/openstack/pv02\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.555673 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.555997 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f0b77b88-19a5-4bdc-87a1-6a65273226a2-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.556207 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0b77b88-19a5-4bdc-87a1-6a65273226a2-logs\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.560051 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0b77b88-19a5-4bdc-87a1-6a65273226a2-scripts\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.560741 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0b77b88-19a5-4bdc-87a1-6a65273226a2-config-data\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.571729 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l8cqp\" (UniqueName: \"kubernetes.io/projected/f0b77b88-19a5-4bdc-87a1-6a65273226a2-kube-api-access-l8cqp\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.575261 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.583269 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-internal-api-0\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:00 crc kubenswrapper[4751]: I0131 15:02:00.709989 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:01 crc kubenswrapper[4751]: I0131 15:02:01.189880 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 15:02:01 crc kubenswrapper[4751]: I0131 15:02:01.330551 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"f0b77b88-19a5-4bdc-87a1-6a65273226a2","Type":"ContainerStarted","Data":"58907fb108567ccc157e944740b878da74b00dd4afd8f71e705346251d50d030"} Jan 31 15:02:02 crc kubenswrapper[4751]: I0131 15:02:02.340139 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"f0b77b88-19a5-4bdc-87a1-6a65273226a2","Type":"ContainerStarted","Data":"d00daa5873d01a2a52917e99a159d4dd523630cf9771ab415ea43dc1ab2768ec"} Jan 31 15:02:02 crc kubenswrapper[4751]: I0131 15:02:02.340691 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"f0b77b88-19a5-4bdc-87a1-6a65273226a2","Type":"ContainerStarted","Data":"e332c13695ee9418872977980d66d846c226e037834cc39d0c88b742a39fc6a9"} Jan 31 15:02:02 crc kubenswrapper[4751]: I0131 
15:02:02.340706 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"f0b77b88-19a5-4bdc-87a1-6a65273226a2","Type":"ContainerStarted","Data":"cdf19d70be1f20e70e6253f8f6e27452c7be5e6f13392a31b245256551aa31c1"} Jan 31 15:02:02 crc kubenswrapper[4751]: I0131 15:02:02.369886 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-0" podStartSLOduration=2.369866608 podStartE2EDuration="2.369866608s" podCreationTimestamp="2026-01-31 15:02:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:02:02.364843275 +0000 UTC m=+1226.739556150" watchObservedRunningTime="2026-01-31 15:02:02.369866608 +0000 UTC m=+1226.744579493" Jan 31 15:02:05 crc kubenswrapper[4751]: I0131 15:02:05.781365 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:05 crc kubenswrapper[4751]: I0131 15:02:05.782030 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:05 crc kubenswrapper[4751]: I0131 15:02:05.782060 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:05 crc kubenswrapper[4751]: I0131 15:02:05.807787 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:05 crc kubenswrapper[4751]: I0131 15:02:05.815643 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:05 crc kubenswrapper[4751]: I0131 15:02:05.833397 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" 
pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:06 crc kubenswrapper[4751]: I0131 15:02:06.376936 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:06 crc kubenswrapper[4751]: I0131 15:02:06.377016 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:06 crc kubenswrapper[4751]: I0131 15:02:06.377043 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:06 crc kubenswrapper[4751]: I0131 15:02:06.395678 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:06 crc kubenswrapper[4751]: I0131 15:02:06.396808 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:06 crc kubenswrapper[4751]: I0131 15:02:06.399226 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:08 crc kubenswrapper[4751]: I0131 15:02:08.896923 4751 patch_prober.go:28] interesting pod/machine-config-daemon-2wpj7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:02:08 crc kubenswrapper[4751]: I0131 15:02:08.897527 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:02:10 crc kubenswrapper[4751]: I0131 
15:02:10.710967 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:10 crc kubenswrapper[4751]: I0131 15:02:10.711274 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:10 crc kubenswrapper[4751]: I0131 15:02:10.711285 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:10 crc kubenswrapper[4751]: I0131 15:02:10.744500 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:10 crc kubenswrapper[4751]: I0131 15:02:10.753213 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:10 crc kubenswrapper[4751]: I0131 15:02:10.776691 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:11 crc kubenswrapper[4751]: I0131 15:02:11.421808 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:11 crc kubenswrapper[4751]: I0131 15:02:11.421857 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:11 crc kubenswrapper[4751]: I0131 15:02:11.421866 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:11 crc kubenswrapper[4751]: I0131 15:02:11.435409 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:11 crc kubenswrapper[4751]: I0131 15:02:11.435486 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:11 crc kubenswrapper[4751]: I0131 15:02:11.436275 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.712029 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.715264 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.726236 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.727603 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.743376 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.777300 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.810097 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.811736 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.821000 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.822868 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.836134 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.846574 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.867698 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-etc-iscsi\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.867745 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-sys\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.867773 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-var-locks-brick\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.867796 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vkrk5\" (UniqueName: \"kubernetes.io/projected/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-kube-api-access-vkrk5\") pod \"glance-default-external-api-2\" (UID: 
\"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.867834 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-config-data\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.867853 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.867869 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-etc-nvme\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.867885 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t25kc\" (UniqueName: \"kubernetes.io/projected/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-kube-api-access-t25kc\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.867908 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-logs\") 
pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.867927 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-logs\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.867942 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-httpd-run\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.867956 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-dev\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.867972 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.867995 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.868011 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-dev\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.868030 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-run\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.868043 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-scripts\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.868063 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-scripts\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.868101 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-etc-iscsi\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.868204 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-etc-nvme\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.868298 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-sys\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.868415 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.868483 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-lib-modules\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.868504 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-var-locks-brick\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.868523 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-config-data\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.868544 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-httpd-run\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.869424 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-lib-modules\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.869481 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-run\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.971336 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-t25kc\" (UniqueName: \"kubernetes.io/projected/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-kube-api-access-t25kc\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.971608 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-logs\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.971688 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-dev\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.971800 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.972120 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-logs\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.972647 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-logs\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.972677 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-httpd-run\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.972702 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-dev\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.972724 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.972744 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.972764 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.972779 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-scripts\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.972822 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.972840 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-dev\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.972862 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95acd323-0a11-4e25-8439-f848c8811df5-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.972917 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-run\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.972967 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-scripts\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.972986 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.972996 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-logs\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.973012 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-run\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.973051 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-dev\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.973101 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-scripts\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.973148 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-lib-modules\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.973169 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-etc-nvme\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.973189 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-logs\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.973210 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-etc-iscsi\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.973244 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-sys\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.973269 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-etc-iscsi\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.973299 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-etc-nvme\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.973353 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-sys\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.973356 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") device mount path \"/mnt/openstack/pv03\"" pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.973377 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-httpd-run\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.973553 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-dev\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.973596 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-config-data\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.973614 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-etc-nvme\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.973628 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-sys\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.973786 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") device mount path \"/mnt/openstack/pv15\"" pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.973815 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-etc-iscsi\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.973846 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-run\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.974033 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-httpd-run\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.974974 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfkw8\" (UniqueName: \"kubernetes.io/projected/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-kube-api-access-lfkw8\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.975050 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.975085 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vklgz\" (UniqueName: \"kubernetes.io/projected/95acd323-0a11-4e25-8439-f848c8811df5-kube-api-access-vklgz\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.975163 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-dev\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.975218 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-lib-modules\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.975246 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-var-locks-brick\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.975267 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-config-data\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.975277 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-lib-modules\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.975288 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-httpd-run\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.975282 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") device mount path \"/mnt/openstack/pv01\"" pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.975326 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-var-locks-brick\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.975353 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.975469 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-lib-modules\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.975546 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95acd323-0a11-4e25-8439-f848c8811df5-config-data\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.975573 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-run\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.975588 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-lib-modules\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.975613 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-var-locks-brick\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.975645 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-run\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.975697 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-etc-iscsi\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.975730 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-etc-iscsi\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.975692 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-httpd-run\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.975846 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95acd323-0a11-4e25-8439-f848c8811df5-scripts\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.975894 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95acd323-0a11-4e25-8439-f848c8811df5-logs\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.975918 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.975939 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-sys\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.975954 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-sys\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.976034 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-var-locks-brick\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.976051 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.976113 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vkrk5\" (UniqueName: \"kubernetes.io/projected/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-kube-api-access-vkrk5\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.976126 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-var-locks-brick\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.976138 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-run\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.976064 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-sys\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.976195 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-config-data\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.976223 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.976246 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.976269 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-etc-nvme\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.976516 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") device mount path \"/mnt/openstack/pv08\"" pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.976663 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-etc-nvme\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.980346 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-scripts\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.983212 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-config-data\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1"
Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.984354 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-config-data\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2"
Jan 31
15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.988166 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-t25kc\" (UniqueName: \"kubernetes.io/projected/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-kube-api-access-t25kc\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.988675 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-scripts\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:02:13 crc kubenswrapper[4751]: I0131 15:02:13.996376 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vkrk5\" (UniqueName: \"kubernetes.io/projected/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-kube-api-access-vkrk5\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.000625 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.001323 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.008149 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-1\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.014704 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"glance-default-external-api-2\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.043999 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.059706 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.077677 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-run\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.077758 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.077796 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-dev\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.077870 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.077907 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.077935 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.077954 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-scripts\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.077964 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-run\") pod 
\"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.077990 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95acd323-0a11-4e25-8439-f848c8811df5-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.078042 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.078084 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-run\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.078095 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.078400 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") device mount path 
\"/mnt/openstack/pv13\"" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.078122 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-lib-modules\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.078488 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-etc-nvme\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.078489 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95acd323-0a11-4e25-8439-f848c8811df5-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.078515 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-logs\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.078301 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-run\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 
31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.078156 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-lib-modules\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.078165 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") device mount path \"/mnt/openstack/pv18\"" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.078575 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-etc-nvme\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.078116 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") device mount path \"/mnt/openstack/pv16\"" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.078270 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") device mount path \"/mnt/openstack/pv04\"" 
pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.078754 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-sys\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.078809 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-etc-iscsi\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.078863 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-httpd-run\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.078905 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-config-data\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.078931 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lfkw8\" (UniqueName: \"kubernetes.io/projected/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-kube-api-access-lfkw8\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " 
pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.078969 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vklgz\" (UniqueName: \"kubernetes.io/projected/95acd323-0a11-4e25-8439-f848c8811df5-kube-api-access-vklgz\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.078998 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-dev\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.079035 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-logs\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.079056 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.079110 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-sys\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:14 
crc kubenswrapper[4751]: I0131 15:02:14.079115 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.079137 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-etc-iscsi\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.079417 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-dev\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.078416 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-dev\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.079867 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95acd323-0a11-4e25-8439-f848c8811df5-config-data\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.079918 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-var-locks-brick\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.079947 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95acd323-0a11-4e25-8439-f848c8811df5-scripts\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.079983 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95acd323-0a11-4e25-8439-f848c8811df5-logs\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.080009 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.080034 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-sys\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.080135 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.080273 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.080317 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-var-locks-brick\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.083254 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-scripts\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.083301 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-sys\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.083316 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.083665 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-httpd-run\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.084486 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95acd323-0a11-4e25-8439-f848c8811df5-logs\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.088956 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-config-data\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.088984 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95acd323-0a11-4e25-8439-f848c8811df5-config-data\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.096909 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95acd323-0a11-4e25-8439-f848c8811df5-scripts\") pod 
\"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.101597 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfkw8\" (UniqueName: \"kubernetes.io/projected/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-kube-api-access-lfkw8\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.103730 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.105637 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.109176 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vklgz\" (UniqueName: \"kubernetes.io/projected/95acd323-0a11-4e25-8439-f848c8811df5-kube-api-access-vklgz\") pod \"glance-default-internal-api-1\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.110654 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " 
pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.110830 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-internal-api-2\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.132940 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.146820 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.509510 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.561151 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Jan 31 15:02:14 crc kubenswrapper[4751]: W0131 15:02:14.563103 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod320d0141_d27c_4f4d_9527_ae0f4db2f4fe.slice/crio-bcdaf73c5ebfac54d35792da2a6c14709ac5d1c622bf7ee51e304db07c548663 WatchSource:0}: Error finding container bcdaf73c5ebfac54d35792da2a6c14709ac5d1c622bf7ee51e304db07c548663: Status 404 returned error can't find the container with id bcdaf73c5ebfac54d35792da2a6c14709ac5d1c622bf7ee51e304db07c548663 Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.626094 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Jan 31 15:02:14 crc kubenswrapper[4751]: I0131 15:02:14.636013 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 15:02:14 crc kubenswrapper[4751]: W0131 15:02:14.636561 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod95acd323_0a11_4e25_8439_f848c8811df5.slice/crio-cf4c0a604cd5f16c5907f38582ee871fe39142f1bf81d14eb1c48a46548f4e79 WatchSource:0}: Error finding container cf4c0a604cd5f16c5907f38582ee871fe39142f1bf81d14eb1c48a46548f4e79: Status 404 returned error can't find the container with id cf4c0a604cd5f16c5907f38582ee871fe39142f1bf81d14eb1c48a46548f4e79 Jan 31 15:02:14 crc kubenswrapper[4751]: W0131 15:02:14.657658 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod7ad25a0a_80c0_46fc_9eb7_c91e86c2d3ad.slice/crio-71e60331171172a0be59cf0f295e6f9b8b83fa45d5df8411886bce4a9f2a6d9c WatchSource:0}: Error finding container 71e60331171172a0be59cf0f295e6f9b8b83fa45d5df8411886bce4a9f2a6d9c: Status 404 returned error can't find the container with id 71e60331171172a0be59cf0f295e6f9b8b83fa45d5df8411886bce4a9f2a6d9c Jan 31 15:02:15 crc kubenswrapper[4751]: I0131 15:02:15.458496 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"320d0141-d27c-4f4d-9527-ae0f4db2f4fe","Type":"ContainerStarted","Data":"6ede99f5fdfed08a9d7eac6a184d5660b106e725595d6c7cf676d14eefad9233"} Jan 31 15:02:15 crc kubenswrapper[4751]: I0131 15:02:15.458950 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"320d0141-d27c-4f4d-9527-ae0f4db2f4fe","Type":"ContainerStarted","Data":"682e4ce2a6210e637ba5e4d2dbaf2ac342164e083e46f40cc551cb32e38ee04b"} Jan 31 15:02:15 crc kubenswrapper[4751]: I0131 15:02:15.458961 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" 
event={"ID":"320d0141-d27c-4f4d-9527-ae0f4db2f4fe","Type":"ContainerStarted","Data":"6913cb3476dd5e0905008785656c005c7a92a66aba7f669cab05437045db35fe"} Jan 31 15:02:15 crc kubenswrapper[4751]: I0131 15:02:15.458970 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"320d0141-d27c-4f4d-9527-ae0f4db2f4fe","Type":"ContainerStarted","Data":"bcdaf73c5ebfac54d35792da2a6c14709ac5d1c622bf7ee51e304db07c548663"} Jan 31 15:02:15 crc kubenswrapper[4751]: I0131 15:02:15.461946 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"95acd323-0a11-4e25-8439-f848c8811df5","Type":"ContainerStarted","Data":"1604ddf440b351b315839a0e9505346fa56959eb8d95f8bc4383bea7078e51c8"} Jan 31 15:02:15 crc kubenswrapper[4751]: I0131 15:02:15.461970 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"95acd323-0a11-4e25-8439-f848c8811df5","Type":"ContainerStarted","Data":"437b68b7d490ec3c812b65b9b5b235cb5bc63fbc0448ab8d751106a1f12c81b2"} Jan 31 15:02:15 crc kubenswrapper[4751]: I0131 15:02:15.461980 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"95acd323-0a11-4e25-8439-f848c8811df5","Type":"ContainerStarted","Data":"cf4c0a604cd5f16c5907f38582ee871fe39142f1bf81d14eb1c48a46548f4e79"} Jan 31 15:02:15 crc kubenswrapper[4751]: I0131 15:02:15.466207 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"6a459e47-85a7-4f4d-84ba-a7d3e01180dc","Type":"ContainerStarted","Data":"780c4a5951b3f68e86416bac67962e1bcf96fd06498cc8fffeda627b35ea2d9a"} Jan 31 15:02:15 crc kubenswrapper[4751]: I0131 15:02:15.466235 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" 
event={"ID":"6a459e47-85a7-4f4d-84ba-a7d3e01180dc","Type":"ContainerStarted","Data":"ad949de535e978e687a00582bdf68a90265d128f5c20313eaa241f804c6802e1"} Jan 31 15:02:15 crc kubenswrapper[4751]: I0131 15:02:15.466244 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"6a459e47-85a7-4f4d-84ba-a7d3e01180dc","Type":"ContainerStarted","Data":"dfe4c141e34d71abd0cb1dc9d7e5475327eb2788a53a9e9597a4f06bd94deca9"} Jan 31 15:02:15 crc kubenswrapper[4751]: I0131 15:02:15.466253 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"6a459e47-85a7-4f4d-84ba-a7d3e01180dc","Type":"ContainerStarted","Data":"d8672d8a656b9f58508baa22372a3b5bcd5f2f26025dd43a8c5d2f9ca074eb76"} Jan 31 15:02:15 crc kubenswrapper[4751]: I0131 15:02:15.476215 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad","Type":"ContainerStarted","Data":"b399921133ccb6ca2d797749e4080f6c6ff9b63570cdf4da4a13f5d0b74243c8"} Jan 31 15:02:15 crc kubenswrapper[4751]: I0131 15:02:15.476263 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad","Type":"ContainerStarted","Data":"06745ce7f672d1fed8fdec0610aed76d08cb2a27019556cb1771945716a7dd90"} Jan 31 15:02:15 crc kubenswrapper[4751]: I0131 15:02:15.476280 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad","Type":"ContainerStarted","Data":"71e60331171172a0be59cf0f295e6f9b8b83fa45d5df8411886bce4a9f2a6d9c"} Jan 31 15:02:15 crc kubenswrapper[4751]: I0131 15:02:15.486964 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-external-api-1" podStartSLOduration=3.486941399 
podStartE2EDuration="3.486941399s" podCreationTimestamp="2026-01-31 15:02:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:02:15.484438663 +0000 UTC m=+1239.859151548" watchObservedRunningTime="2026-01-31 15:02:15.486941399 +0000 UTC m=+1239.861654284" Jan 31 15:02:15 crc kubenswrapper[4751]: I0131 15:02:15.513377 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-external-api-2" podStartSLOduration=3.513359105 podStartE2EDuration="3.513359105s" podCreationTimestamp="2026-01-31 15:02:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:02:15.509526604 +0000 UTC m=+1239.884239489" watchObservedRunningTime="2026-01-31 15:02:15.513359105 +0000 UTC m=+1239.888071990" Jan 31 15:02:16 crc kubenswrapper[4751]: I0131 15:02:16.486057 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad","Type":"ContainerStarted","Data":"a5e9523257d5d31be3474c9ebb00bf1150499e723b36a65c3a75e75e8cce0b9b"} Jan 31 15:02:16 crc kubenswrapper[4751]: I0131 15:02:16.490231 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"95acd323-0a11-4e25-8439-f848c8811df5","Type":"ContainerStarted","Data":"14cacda155a9f23ba59e4f2a3da7afceb0ef3ab9efee85c5bb5e5e8c246a77bf"} Jan 31 15:02:16 crc kubenswrapper[4751]: I0131 15:02:16.534355 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-2" podStartSLOduration=4.534335358 podStartE2EDuration="4.534335358s" podCreationTimestamp="2026-01-31 15:02:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 
UTC" observedRunningTime="2026-01-31 15:02:16.52985317 +0000 UTC m=+1240.904566055" watchObservedRunningTime="2026-01-31 15:02:16.534335358 +0000 UTC m=+1240.909048253" Jan 31 15:02:16 crc kubenswrapper[4751]: I0131 15:02:16.564665 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-1" podStartSLOduration=4.5646452360000005 podStartE2EDuration="4.564645236s" podCreationTimestamp="2026-01-31 15:02:12 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:02:16.563801414 +0000 UTC m=+1240.938514319" watchObservedRunningTime="2026-01-31 15:02:16.564645236 +0000 UTC m=+1240.939358121" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.044330 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.046389 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.046500 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.060792 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.060841 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.060852 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.074241 4751 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.084809 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.087391 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.090876 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.091279 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.119093 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.134226 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.134276 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.134287 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.149542 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.149587 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.149636 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.160788 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.163256 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.174280 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.176196 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.179286 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.193329 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.549005 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.550301 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.550347 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:02:24 crc 
kubenswrapper[4751]: I0131 15:02:24.550368 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.550386 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.550403 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.550419 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.550488 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.550507 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.550521 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.550537 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.550553 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.562637 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.563511 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.566211 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.566282 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.567152 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.568242 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.569012 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.570395 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.571572 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.572764 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.576595 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:02:24 crc kubenswrapper[4751]: I0131 15:02:24.580399 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:25 crc 
kubenswrapper[4751]: I0131 15:02:25.295864 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Jan 31 15:02:25 crc kubenswrapper[4751]: I0131 15:02:25.310054 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Jan 31 15:02:25 crc kubenswrapper[4751]: I0131 15:02:25.500986 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Jan 31 15:02:25 crc kubenswrapper[4751]: I0131 15:02:25.509336 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 15:02:26 crc kubenswrapper[4751]: I0131 15:02:26.561539 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-1" podUID="320d0141-d27c-4f4d-9527-ae0f4db2f4fe" containerName="glance-log" containerID="cri-o://6913cb3476dd5e0905008785656c005c7a92a66aba7f669cab05437045db35fe" gracePeriod=30 Jan 31 15:02:26 crc kubenswrapper[4751]: I0131 15:02:26.561625 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-1" podUID="320d0141-d27c-4f4d-9527-ae0f4db2f4fe" containerName="glance-api" containerID="cri-o://6ede99f5fdfed08a9d7eac6a184d5660b106e725595d6c7cf676d14eefad9233" gracePeriod=30 Jan 31 15:02:26 crc kubenswrapper[4751]: I0131 15:02:26.561637 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-1" podUID="320d0141-d27c-4f4d-9527-ae0f4db2f4fe" containerName="glance-httpd" containerID="cri-o://682e4ce2a6210e637ba5e4d2dbaf2ac342164e083e46f40cc551cb32e38ee04b" gracePeriod=30 Jan 31 15:02:26 crc kubenswrapper[4751]: I0131 15:02:26.561731 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-2" 
podUID="6a459e47-85a7-4f4d-84ba-a7d3e01180dc" containerName="glance-log" containerID="cri-o://dfe4c141e34d71abd0cb1dc9d7e5475327eb2788a53a9e9597a4f06bd94deca9" gracePeriod=30 Jan 31 15:02:26 crc kubenswrapper[4751]: I0131 15:02:26.561787 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-2" podUID="6a459e47-85a7-4f4d-84ba-a7d3e01180dc" containerName="glance-api" containerID="cri-o://780c4a5951b3f68e86416bac67962e1bcf96fd06498cc8fffeda627b35ea2d9a" gracePeriod=30 Jan 31 15:02:26 crc kubenswrapper[4751]: I0131 15:02:26.561802 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-2" podUID="6a459e47-85a7-4f4d-84ba-a7d3e01180dc" containerName="glance-httpd" containerID="cri-o://ad949de535e978e687a00582bdf68a90265d128f5c20313eaa241f804c6802e1" gracePeriod=30 Jan 31 15:02:26 crc kubenswrapper[4751]: I0131 15:02:26.562039 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="95acd323-0a11-4e25-8439-f848c8811df5" containerName="glance-log" containerID="cri-o://437b68b7d490ec3c812b65b9b5b235cb5bc63fbc0448ab8d751106a1f12c81b2" gracePeriod=30 Jan 31 15:02:26 crc kubenswrapper[4751]: I0131 15:02:26.562056 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="95acd323-0a11-4e25-8439-f848c8811df5" containerName="glance-api" containerID="cri-o://14cacda155a9f23ba59e4f2a3da7afceb0ef3ab9efee85c5bb5e5e8c246a77bf" gracePeriod=30 Jan 31 15:02:26 crc kubenswrapper[4751]: I0131 15:02:26.562081 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="95acd323-0a11-4e25-8439-f848c8811df5" containerName="glance-httpd" 
containerID="cri-o://1604ddf440b351b315839a0e9505346fa56959eb8d95f8bc4383bea7078e51c8" gracePeriod=30 Jan 31 15:02:26 crc kubenswrapper[4751]: I0131 15:02:26.562192 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-2" podUID="7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad" containerName="glance-log" containerID="cri-o://06745ce7f672d1fed8fdec0610aed76d08cb2a27019556cb1771945716a7dd90" gracePeriod=30 Jan 31 15:02:26 crc kubenswrapper[4751]: I0131 15:02:26.562252 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-2" podUID="7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad" containerName="glance-api" containerID="cri-o://a5e9523257d5d31be3474c9ebb00bf1150499e723b36a65c3a75e75e8cce0b9b" gracePeriod=30 Jan 31 15:02:26 crc kubenswrapper[4751]: I0131 15:02:26.562266 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-2" podUID="7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad" containerName="glance-httpd" containerID="cri-o://b399921133ccb6ca2d797749e4080f6c6ff9b63570cdf4da4a13f5d0b74243c8" gracePeriod=30 Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.366550 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.446661 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-scripts\") pod \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.446765 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-etc-nvme\") pod \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.446799 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.446830 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-etc-iscsi\") pod \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.447119 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-var-locks-brick\") pod \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.447155 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-sys\") pod \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.447221 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-dev\") pod \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.447252 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-lib-modules\") pod \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.447342 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-config-data\") pod \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.447350 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad" (UID: "7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.447388 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-run\") pod \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.447462 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-run" (OuterVolumeSpecName: "run") pod "7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad" (UID: "7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.447526 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad" (UID: "7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.447549 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-httpd-run\") pod \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.447561 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-sys" (OuterVolumeSpecName: "sys") pod "7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad" (UID: "7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.447595 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-dev" (OuterVolumeSpecName: "dev") pod "7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad" (UID: "7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.447620 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-logs\") pod \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.447628 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad" (UID: "7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.447674 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfkw8\" (UniqueName: \"kubernetes.io/projected/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-kube-api-access-lfkw8\") pod \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.447727 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\" (UID: \"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.448977 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-logs" (OuterVolumeSpecName: "logs") pod "7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad" (UID: "7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.449208 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad" (UID: "7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.449903 4751 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-dev\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.449941 4751 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.449952 4751 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.449962 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.449971 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.449982 4751 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.449992 4751 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.450002 4751 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-sys\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.450236 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad" (UID: "7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.458243 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage13-crc" (OuterVolumeSpecName: "glance-cache") pod "7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad" (UID: "7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad"). InnerVolumeSpecName "local-storage13-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.458676 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-scripts" (OuterVolumeSpecName: "scripts") pod "7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad" (UID: "7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.458751 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-kube-api-access-lfkw8" (OuterVolumeSpecName: "kube-api-access-lfkw8") pod "7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad" (UID: "7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad"). InnerVolumeSpecName "kube-api-access-lfkw8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.462194 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad" (UID: "7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad"). InnerVolumeSpecName "local-storage04-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.489527 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.501098 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.507213 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.550816 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vkrk5\" (UniqueName: \"kubernetes.io/projected/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-kube-api-access-vkrk5\") pod \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.551064 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-config-data\") pod \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.551193 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-etc-iscsi\") pod \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.551283 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "6a459e47-85a7-4f4d-84ba-a7d3e01180dc" (UID: "6a459e47-85a7-4f4d-84ba-a7d3e01180dc"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.551410 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-var-locks-brick\") pod \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.551432 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "6a459e47-85a7-4f4d-84ba-a7d3e01180dc" (UID: "6a459e47-85a7-4f4d-84ba-a7d3e01180dc"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.551763 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-sys\") pod \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.551962 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-lib-modules\") pod \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.552348 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.552463 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-etc-nvme\") pod \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.552579 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-httpd-run\") pod \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.552688 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-run\") pod 
\"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.552820 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-dev\") pod \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.552931 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-logs\") pod \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.553046 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-scripts\") pod \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.553151 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\" (UID: \"6a459e47-85a7-4f4d-84ba-a7d3e01180dc\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.553723 4751 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.553811 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lfkw8\" (UniqueName: \"kubernetes.io/projected/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-kube-api-access-lfkw8\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: 
I0131 15:02:27.553884 4751 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.553966 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.554036 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.554125 4751 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.554203 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.551895 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-sys" (OuterVolumeSpecName: "sys") pod "6a459e47-85a7-4f4d-84ba-a7d3e01180dc" (UID: "6a459e47-85a7-4f4d-84ba-a7d3e01180dc"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.552283 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "6a459e47-85a7-4f4d-84ba-a7d3e01180dc" (UID: "6a459e47-85a7-4f4d-84ba-a7d3e01180dc"). 
InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.556275 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-dev" (OuterVolumeSpecName: "dev") pod "6a459e47-85a7-4f4d-84ba-a7d3e01180dc" (UID: "6a459e47-85a7-4f4d-84ba-a7d3e01180dc"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.556314 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "6a459e47-85a7-4f4d-84ba-a7d3e01180dc" (UID: "6a459e47-85a7-4f4d-84ba-a7d3e01180dc"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.557042 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "6a459e47-85a7-4f4d-84ba-a7d3e01180dc" (UID: "6a459e47-85a7-4f4d-84ba-a7d3e01180dc"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.557061 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-logs" (OuterVolumeSpecName: "logs") pod "6a459e47-85a7-4f4d-84ba-a7d3e01180dc" (UID: "6a459e47-85a7-4f4d-84ba-a7d3e01180dc"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.558300 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-scripts" (OuterVolumeSpecName: "scripts") pod "6a459e47-85a7-4f4d-84ba-a7d3e01180dc" (UID: "6a459e47-85a7-4f4d-84ba-a7d3e01180dc"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.564020 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-run" (OuterVolumeSpecName: "run") pod "6a459e47-85a7-4f4d-84ba-a7d3e01180dc" (UID: "6a459e47-85a7-4f4d-84ba-a7d3e01180dc"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.564362 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-kube-api-access-vkrk5" (OuterVolumeSpecName: "kube-api-access-vkrk5") pod "6a459e47-85a7-4f4d-84ba-a7d3e01180dc" (UID: "6a459e47-85a7-4f4d-84ba-a7d3e01180dc"). InnerVolumeSpecName "kube-api-access-vkrk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.575481 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage15-crc" (OuterVolumeSpecName: "glance") pod "6a459e47-85a7-4f4d-84ba-a7d3e01180dc" (UID: "6a459e47-85a7-4f4d-84ba-a7d3e01180dc"). InnerVolumeSpecName "local-storage15-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.575681 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage13-crc" (UniqueName: "kubernetes.io/local-volume/local-storage13-crc") on node "crc" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.579156 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.584038 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance-cache") pod "6a459e47-85a7-4f4d-84ba-a7d3e01180dc" (UID: "6a459e47-85a7-4f4d-84ba-a7d3e01180dc"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.588236 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-config-data" (OuterVolumeSpecName: "config-data") pod "7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad" (UID: "7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.597717 4751 generic.go:334] "Generic (PLEG): container finished" podID="95acd323-0a11-4e25-8439-f848c8811df5" containerID="14cacda155a9f23ba59e4f2a3da7afceb0ef3ab9efee85c5bb5e5e8c246a77bf" exitCode=0 Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.597751 4751 generic.go:334] "Generic (PLEG): container finished" podID="95acd323-0a11-4e25-8439-f848c8811df5" containerID="1604ddf440b351b315839a0e9505346fa56959eb8d95f8bc4383bea7078e51c8" exitCode=0 Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.597762 4751 generic.go:334] "Generic (PLEG): container finished" podID="95acd323-0a11-4e25-8439-f848c8811df5" containerID="437b68b7d490ec3c812b65b9b5b235cb5bc63fbc0448ab8d751106a1f12c81b2" exitCode=143 Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.597812 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"95acd323-0a11-4e25-8439-f848c8811df5","Type":"ContainerDied","Data":"14cacda155a9f23ba59e4f2a3da7afceb0ef3ab9efee85c5bb5e5e8c246a77bf"} Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.597843 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"95acd323-0a11-4e25-8439-f848c8811df5","Type":"ContainerDied","Data":"1604ddf440b351b315839a0e9505346fa56959eb8d95f8bc4383bea7078e51c8"} Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.597856 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"95acd323-0a11-4e25-8439-f848c8811df5","Type":"ContainerDied","Data":"437b68b7d490ec3c812b65b9b5b235cb5bc63fbc0448ab8d751106a1f12c81b2"} Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.597871 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" 
event={"ID":"95acd323-0a11-4e25-8439-f848c8811df5","Type":"ContainerDied","Data":"cf4c0a604cd5f16c5907f38582ee871fe39142f1bf81d14eb1c48a46548f4e79"} Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.597889 4751 scope.go:117] "RemoveContainer" containerID="14cacda155a9f23ba59e4f2a3da7afceb0ef3ab9efee85c5bb5e5e8c246a77bf" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.598030 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.608498 4751 generic.go:334] "Generic (PLEG): container finished" podID="6a459e47-85a7-4f4d-84ba-a7d3e01180dc" containerID="780c4a5951b3f68e86416bac67962e1bcf96fd06498cc8fffeda627b35ea2d9a" exitCode=0 Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.608533 4751 generic.go:334] "Generic (PLEG): container finished" podID="6a459e47-85a7-4f4d-84ba-a7d3e01180dc" containerID="ad949de535e978e687a00582bdf68a90265d128f5c20313eaa241f804c6802e1" exitCode=0 Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.608542 4751 generic.go:334] "Generic (PLEG): container finished" podID="6a459e47-85a7-4f4d-84ba-a7d3e01180dc" containerID="dfe4c141e34d71abd0cb1dc9d7e5475327eb2788a53a9e9597a4f06bd94deca9" exitCode=143 Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.608616 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"6a459e47-85a7-4f4d-84ba-a7d3e01180dc","Type":"ContainerDied","Data":"780c4a5951b3f68e86416bac67962e1bcf96fd06498cc8fffeda627b35ea2d9a"} Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.608650 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"6a459e47-85a7-4f4d-84ba-a7d3e01180dc","Type":"ContainerDied","Data":"ad949de535e978e687a00582bdf68a90265d128f5c20313eaa241f804c6802e1"} Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 
15:02:27.608663 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"6a459e47-85a7-4f4d-84ba-a7d3e01180dc","Type":"ContainerDied","Data":"dfe4c141e34d71abd0cb1dc9d7e5475327eb2788a53a9e9597a4f06bd94deca9"} Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.608676 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"6a459e47-85a7-4f4d-84ba-a7d3e01180dc","Type":"ContainerDied","Data":"d8672d8a656b9f58508baa22372a3b5bcd5f2f26025dd43a8c5d2f9ca074eb76"} Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.608750 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.619808 4751 generic.go:334] "Generic (PLEG): container finished" podID="7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad" containerID="a5e9523257d5d31be3474c9ebb00bf1150499e723b36a65c3a75e75e8cce0b9b" exitCode=0 Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.619866 4751 generic.go:334] "Generic (PLEG): container finished" podID="7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad" containerID="b399921133ccb6ca2d797749e4080f6c6ff9b63570cdf4da4a13f5d0b74243c8" exitCode=0 Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.619861 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad","Type":"ContainerDied","Data":"a5e9523257d5d31be3474c9ebb00bf1150499e723b36a65c3a75e75e8cce0b9b"} Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.619895 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.619913 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad","Type":"ContainerDied","Data":"b399921133ccb6ca2d797749e4080f6c6ff9b63570cdf4da4a13f5d0b74243c8"} Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.619925 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad","Type":"ContainerDied","Data":"06745ce7f672d1fed8fdec0610aed76d08cb2a27019556cb1771945716a7dd90"} Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.619877 4751 generic.go:334] "Generic (PLEG): container finished" podID="7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad" containerID="06745ce7f672d1fed8fdec0610aed76d08cb2a27019556cb1771945716a7dd90" exitCode=143 Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.619976 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad","Type":"ContainerDied","Data":"71e60331171172a0be59cf0f295e6f9b8b83fa45d5df8411886bce4a9f2a6d9c"} Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.626706 4751 generic.go:334] "Generic (PLEG): container finished" podID="320d0141-d27c-4f4d-9527-ae0f4db2f4fe" containerID="6ede99f5fdfed08a9d7eac6a184d5660b106e725595d6c7cf676d14eefad9233" exitCode=0 Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.626741 4751 generic.go:334] "Generic (PLEG): container finished" podID="320d0141-d27c-4f4d-9527-ae0f4db2f4fe" containerID="682e4ce2a6210e637ba5e4d2dbaf2ac342164e083e46f40cc551cb32e38ee04b" exitCode=0 Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.626749 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.626766 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"320d0141-d27c-4f4d-9527-ae0f4db2f4fe","Type":"ContainerDied","Data":"6ede99f5fdfed08a9d7eac6a184d5660b106e725595d6c7cf676d14eefad9233"} Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.626799 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"320d0141-d27c-4f4d-9527-ae0f4db2f4fe","Type":"ContainerDied","Data":"682e4ce2a6210e637ba5e4d2dbaf2ac342164e083e46f40cc551cb32e38ee04b"} Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.626811 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"320d0141-d27c-4f4d-9527-ae0f4db2f4fe","Type":"ContainerDied","Data":"6913cb3476dd5e0905008785656c005c7a92a66aba7f669cab05437045db35fe"} Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.626751 4751 generic.go:334] "Generic (PLEG): container finished" podID="320d0141-d27c-4f4d-9527-ae0f4db2f4fe" containerID="6913cb3476dd5e0905008785656c005c7a92a66aba7f669cab05437045db35fe" exitCode=143 Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.626901 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"320d0141-d27c-4f4d-9527-ae0f4db2f4fe","Type":"ContainerDied","Data":"bcdaf73c5ebfac54d35792da2a6c14709ac5d1c622bf7ee51e304db07c548663"} Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.634778 4751 scope.go:117] "RemoveContainer" containerID="1604ddf440b351b315839a0e9505346fa56959eb8d95f8bc4383bea7078e51c8" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.662440 4751 scope.go:117] "RemoveContainer" containerID="437b68b7d490ec3c812b65b9b5b235cb5bc63fbc0448ab8d751106a1f12c81b2" Jan 31 15:02:27 
crc kubenswrapper[4751]: I0131 15:02:27.662905 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-etc-nvme\") pod \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.662939 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-scripts\") pod \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.662972 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-dev\") pod \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.662991 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95acd323-0a11-4e25-8439-f848c8811df5-httpd-run\") pod \"95acd323-0a11-4e25-8439-f848c8811df5\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.663038 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-httpd-run\") pod \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.663103 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-run\") pod \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\" (UID: 
\"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.663132 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95acd323-0a11-4e25-8439-f848c8811df5-config-data\") pod \"95acd323-0a11-4e25-8439-f848c8811df5\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.663177 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-sys\") pod \"95acd323-0a11-4e25-8439-f848c8811df5\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.663227 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-dev\") pod \"95acd323-0a11-4e25-8439-f848c8811df5\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.663253 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t25kc\" (UniqueName: \"kubernetes.io/projected/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-kube-api-access-t25kc\") pod \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.663289 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-var-locks-brick\") pod \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.663334 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-run\") pod \"95acd323-0a11-4e25-8439-f848c8811df5\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.663358 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-etc-iscsi\") pod \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.663376 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-var-locks-brick\") pod \"95acd323-0a11-4e25-8439-f848c8811df5\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.663410 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95acd323-0a11-4e25-8439-f848c8811df5-scripts\") pod \"95acd323-0a11-4e25-8439-f848c8811df5\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.663441 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.663470 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-logs\") pod \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.663498 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"lib-modules\" (UniqueName: \"kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-lib-modules\") pod \"95acd323-0a11-4e25-8439-f848c8811df5\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.663517 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"95acd323-0a11-4e25-8439-f848c8811df5\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.663540 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-config-data\") pod \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.663560 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-etc-iscsi\") pod \"95acd323-0a11-4e25-8439-f848c8811df5\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.663580 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-lib-modules\") pod \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.663602 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95acd323-0a11-4e25-8439-f848c8811df5-logs\") pod \"95acd323-0a11-4e25-8439-f848c8811df5\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.663649 4751 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-sys\") pod \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.663670 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\" (UID: \"320d0141-d27c-4f4d-9527-ae0f4db2f4fe\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.663689 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-etc-nvme\") pod \"95acd323-0a11-4e25-8439-f848c8811df5\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.663708 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"95acd323-0a11-4e25-8439-f848c8811df5\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.663745 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vklgz\" (UniqueName: \"kubernetes.io/projected/95acd323-0a11-4e25-8439-f848c8811df5-kube-api-access-vklgz\") pod \"95acd323-0a11-4e25-8439-f848c8811df5\" (UID: \"95acd323-0a11-4e25-8439-f848c8811df5\") " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.664089 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.664114 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume 
\"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.664128 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vkrk5\" (UniqueName: \"kubernetes.io/projected/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-kube-api-access-vkrk5\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.664142 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.664153 4751 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-sys\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.664164 4751 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.664176 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.664190 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") on node \"crc\" " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.664201 4751 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.664221 4751 reconciler_common.go:293] "Volume 
detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.664234 4751 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.664245 4751 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-dev\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.664255 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.664266 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.668031 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-config-data" (OuterVolumeSpecName: "config-data") pod "6a459e47-85a7-4f4d-84ba-a7d3e01180dc" (UID: "6a459e47-85a7-4f4d-84ba-a7d3e01180dc"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.668872 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-dev" (OuterVolumeSpecName: "dev") pod "320d0141-d27c-4f4d-9527-ae0f4db2f4fe" (UID: "320d0141-d27c-4f4d-9527-ae0f4db2f4fe"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.668929 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "320d0141-d27c-4f4d-9527-ae0f4db2f4fe" (UID: "320d0141-d27c-4f4d-9527-ae0f4db2f4fe"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.669062 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95acd323-0a11-4e25-8439-f848c8811df5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "95acd323-0a11-4e25-8439-f848c8811df5" (UID: "95acd323-0a11-4e25-8439-f848c8811df5"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.669302 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "320d0141-d27c-4f4d-9527-ae0f4db2f4fe" (UID: "320d0141-d27c-4f4d-9527-ae0f4db2f4fe"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.669332 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-run" (OuterVolumeSpecName: "run") pod "320d0141-d27c-4f4d-9527-ae0f4db2f4fe" (UID: "320d0141-d27c-4f4d-9527-ae0f4db2f4fe"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.669356 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "320d0141-d27c-4f4d-9527-ae0f4db2f4fe" (UID: "320d0141-d27c-4f4d-9527-ae0f4db2f4fe"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.669373 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-sys" (OuterVolumeSpecName: "sys") pod "95acd323-0a11-4e25-8439-f848c8811df5" (UID: "95acd323-0a11-4e25-8439-f848c8811df5"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.669390 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-dev" (OuterVolumeSpecName: "dev") pod "95acd323-0a11-4e25-8439-f848c8811df5" (UID: "95acd323-0a11-4e25-8439-f848c8811df5"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.670055 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.671133 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "320d0141-d27c-4f4d-9527-ae0f4db2f4fe" (UID: "320d0141-d27c-4f4d-9527-ae0f4db2f4fe"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.671182 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "320d0141-d27c-4f4d-9527-ae0f4db2f4fe" (UID: "320d0141-d27c-4f4d-9527-ae0f4db2f4fe"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.671220 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-run" (OuterVolumeSpecName: "run") pod "95acd323-0a11-4e25-8439-f848c8811df5" (UID: "95acd323-0a11-4e25-8439-f848c8811df5"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.671254 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "95acd323-0a11-4e25-8439-f848c8811df5" (UID: "95acd323-0a11-4e25-8439-f848c8811df5"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.671490 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/95acd323-0a11-4e25-8439-f848c8811df5-logs" (OuterVolumeSpecName: "logs") pod "95acd323-0a11-4e25-8439-f848c8811df5" (UID: "95acd323-0a11-4e25-8439-f848c8811df5"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.671490 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95acd323-0a11-4e25-8439-f848c8811df5-scripts" (OuterVolumeSpecName: "scripts") pod "95acd323-0a11-4e25-8439-f848c8811df5" (UID: "95acd323-0a11-4e25-8439-f848c8811df5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.671512 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "95acd323-0a11-4e25-8439-f848c8811df5" (UID: "95acd323-0a11-4e25-8439-f848c8811df5"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.671533 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-sys" (OuterVolumeSpecName: "sys") pod "320d0141-d27c-4f4d-9527-ae0f4db2f4fe" (UID: "320d0141-d27c-4f4d-9527-ae0f4db2f4fe"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.672116 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "95acd323-0a11-4e25-8439-f848c8811df5" (UID: "95acd323-0a11-4e25-8439-f848c8811df5"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.672128 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "95acd323-0a11-4e25-8439-f848c8811df5" (UID: "95acd323-0a11-4e25-8439-f848c8811df5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.672201 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-logs" (OuterVolumeSpecName: "logs") pod "320d0141-d27c-4f4d-9527-ae0f4db2f4fe" (UID: "320d0141-d27c-4f4d-9527-ae0f4db2f4fe"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.673144 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-scripts" (OuterVolumeSpecName: "scripts") pod "320d0141-d27c-4f4d-9527-ae0f4db2f4fe" (UID: "320d0141-d27c-4f4d-9527-ae0f4db2f4fe"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.673316 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-kube-api-access-t25kc" (OuterVolumeSpecName: "kube-api-access-t25kc") pod "320d0141-d27c-4f4d-9527-ae0f4db2f4fe" (UID: "320d0141-d27c-4f4d-9527-ae0f4db2f4fe"). InnerVolumeSpecName "kube-api-access-t25kc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.674585 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.674890 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance") pod "320d0141-d27c-4f4d-9527-ae0f4db2f4fe" (UID: "320d0141-d27c-4f4d-9527-ae0f4db2f4fe"). InnerVolumeSpecName "local-storage03-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.676141 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95acd323-0a11-4e25-8439-f848c8811df5-kube-api-access-vklgz" (OuterVolumeSpecName: "kube-api-access-vklgz") pod "95acd323-0a11-4e25-8439-f848c8811df5" (UID: "95acd323-0a11-4e25-8439-f848c8811df5"). InnerVolumeSpecName "kube-api-access-vklgz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.678292 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage16-crc" (OuterVolumeSpecName: "glance") pod "95acd323-0a11-4e25-8439-f848c8811df5" (UID: "95acd323-0a11-4e25-8439-f848c8811df5"). InnerVolumeSpecName "local-storage16-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.679264 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance-cache") pod "320d0141-d27c-4f4d-9527-ae0f4db2f4fe" (UID: "320d0141-d27c-4f4d-9527-ae0f4db2f4fe"). InnerVolumeSpecName "local-storage01-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.679761 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage18-crc" (OuterVolumeSpecName: "glance-cache") pod "95acd323-0a11-4e25-8439-f848c8811df5" (UID: "95acd323-0a11-4e25-8439-f848c8811df5"). InnerVolumeSpecName "local-storage18-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.728645 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage15-crc" (UniqueName: "kubernetes.io/local-volume/local-storage15-crc") on node "crc" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.737525 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.765321 4751 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.765356 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.765365 4751 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-sys\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.765374 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-t25kc\" (UniqueName: \"kubernetes.io/projected/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-kube-api-access-t25kc\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: 
I0131 15:02:27.765383 4751 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-dev\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.765392 4751 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.765400 4751 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.765408 4751 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.765416 4751 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.765427 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/95acd323-0a11-4e25-8439-f848c8811df5-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.765467 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.765475 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-logs\") on node \"crc\" DevicePath 
\"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.765484 4751 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.765503 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") on node \"crc\" " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.765511 4751 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.765520 4751 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.765528 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/95acd323-0a11-4e25-8439-f848c8811df5-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.765538 4751 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-sys\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.765545 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.765558 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.765566 4751 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/95acd323-0a11-4e25-8439-f848c8811df5-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.765578 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") on node \"crc\" " Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.765586 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/6a459e47-85a7-4f4d-84ba-a7d3e01180dc-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.765595 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vklgz\" (UniqueName: \"kubernetes.io/projected/95acd323-0a11-4e25-8439-f848c8811df5-kube-api-access-vklgz\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.765603 4751 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.765612 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.765619 4751 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-dev\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.765627 4751 reconciler_common.go:293] "Volume detached for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/95acd323-0a11-4e25-8439-f848c8811df5-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.765635 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.782695 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-config-data" (OuterVolumeSpecName: "config-data") pod "320d0141-d27c-4f4d-9527-ae0f4db2f4fe" (UID: "320d0141-d27c-4f4d-9527-ae0f4db2f4fe"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.787189 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage16-crc" (UniqueName: "kubernetes.io/local-volume/local-storage16-crc") on node "crc" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.787642 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.789634 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.789983 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95acd323-0a11-4e25-8439-f848c8811df5-config-data" (OuterVolumeSpecName: "config-data") pod "95acd323-0a11-4e25-8439-f848c8811df5" (UID: "95acd323-0a11-4e25-8439-f848c8811df5"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.795386 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage18-crc" (UniqueName: "kubernetes.io/local-volume/local-storage18-crc") on node "crc" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.845410 4751 scope.go:117] "RemoveContainer" containerID="14cacda155a9f23ba59e4f2a3da7afceb0ef3ab9efee85c5bb5e5e8c246a77bf" Jan 31 15:02:27 crc kubenswrapper[4751]: E0131 15:02:27.845890 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"14cacda155a9f23ba59e4f2a3da7afceb0ef3ab9efee85c5bb5e5e8c246a77bf\": container with ID starting with 14cacda155a9f23ba59e4f2a3da7afceb0ef3ab9efee85c5bb5e5e8c246a77bf not found: ID does not exist" containerID="14cacda155a9f23ba59e4f2a3da7afceb0ef3ab9efee85c5bb5e5e8c246a77bf" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.845923 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14cacda155a9f23ba59e4f2a3da7afceb0ef3ab9efee85c5bb5e5e8c246a77bf"} err="failed to get container status \"14cacda155a9f23ba59e4f2a3da7afceb0ef3ab9efee85c5bb5e5e8c246a77bf\": rpc error: code = NotFound desc = could not find container \"14cacda155a9f23ba59e4f2a3da7afceb0ef3ab9efee85c5bb5e5e8c246a77bf\": container with ID starting with 14cacda155a9f23ba59e4f2a3da7afceb0ef3ab9efee85c5bb5e5e8c246a77bf not found: ID does not exist" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.845944 4751 scope.go:117] "RemoveContainer" containerID="1604ddf440b351b315839a0e9505346fa56959eb8d95f8bc4383bea7078e51c8" Jan 31 15:02:27 crc kubenswrapper[4751]: E0131 15:02:27.846217 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1604ddf440b351b315839a0e9505346fa56959eb8d95f8bc4383bea7078e51c8\": container with ID starting with 
1604ddf440b351b315839a0e9505346fa56959eb8d95f8bc4383bea7078e51c8 not found: ID does not exist" containerID="1604ddf440b351b315839a0e9505346fa56959eb8d95f8bc4383bea7078e51c8" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.846243 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1604ddf440b351b315839a0e9505346fa56959eb8d95f8bc4383bea7078e51c8"} err="failed to get container status \"1604ddf440b351b315839a0e9505346fa56959eb8d95f8bc4383bea7078e51c8\": rpc error: code = NotFound desc = could not find container \"1604ddf440b351b315839a0e9505346fa56959eb8d95f8bc4383bea7078e51c8\": container with ID starting with 1604ddf440b351b315839a0e9505346fa56959eb8d95f8bc4383bea7078e51c8 not found: ID does not exist" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.846255 4751 scope.go:117] "RemoveContainer" containerID="437b68b7d490ec3c812b65b9b5b235cb5bc63fbc0448ab8d751106a1f12c81b2" Jan 31 15:02:27 crc kubenswrapper[4751]: E0131 15:02:27.846444 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"437b68b7d490ec3c812b65b9b5b235cb5bc63fbc0448ab8d751106a1f12c81b2\": container with ID starting with 437b68b7d490ec3c812b65b9b5b235cb5bc63fbc0448ab8d751106a1f12c81b2 not found: ID does not exist" containerID="437b68b7d490ec3c812b65b9b5b235cb5bc63fbc0448ab8d751106a1f12c81b2" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.846513 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"437b68b7d490ec3c812b65b9b5b235cb5bc63fbc0448ab8d751106a1f12c81b2"} err="failed to get container status \"437b68b7d490ec3c812b65b9b5b235cb5bc63fbc0448ab8d751106a1f12c81b2\": rpc error: code = NotFound desc = could not find container \"437b68b7d490ec3c812b65b9b5b235cb5bc63fbc0448ab8d751106a1f12c81b2\": container with ID starting with 437b68b7d490ec3c812b65b9b5b235cb5bc63fbc0448ab8d751106a1f12c81b2 not found: ID does not 
exist" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.846531 4751 scope.go:117] "RemoveContainer" containerID="14cacda155a9f23ba59e4f2a3da7afceb0ef3ab9efee85c5bb5e5e8c246a77bf" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.846824 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14cacda155a9f23ba59e4f2a3da7afceb0ef3ab9efee85c5bb5e5e8c246a77bf"} err="failed to get container status \"14cacda155a9f23ba59e4f2a3da7afceb0ef3ab9efee85c5bb5e5e8c246a77bf\": rpc error: code = NotFound desc = could not find container \"14cacda155a9f23ba59e4f2a3da7afceb0ef3ab9efee85c5bb5e5e8c246a77bf\": container with ID starting with 14cacda155a9f23ba59e4f2a3da7afceb0ef3ab9efee85c5bb5e5e8c246a77bf not found: ID does not exist" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.846848 4751 scope.go:117] "RemoveContainer" containerID="1604ddf440b351b315839a0e9505346fa56959eb8d95f8bc4383bea7078e51c8" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.847160 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1604ddf440b351b315839a0e9505346fa56959eb8d95f8bc4383bea7078e51c8"} err="failed to get container status \"1604ddf440b351b315839a0e9505346fa56959eb8d95f8bc4383bea7078e51c8\": rpc error: code = NotFound desc = could not find container \"1604ddf440b351b315839a0e9505346fa56959eb8d95f8bc4383bea7078e51c8\": container with ID starting with 1604ddf440b351b315839a0e9505346fa56959eb8d95f8bc4383bea7078e51c8 not found: ID does not exist" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.847181 4751 scope.go:117] "RemoveContainer" containerID="437b68b7d490ec3c812b65b9b5b235cb5bc63fbc0448ab8d751106a1f12c81b2" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.847387 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"437b68b7d490ec3c812b65b9b5b235cb5bc63fbc0448ab8d751106a1f12c81b2"} err="failed to get container status 
\"437b68b7d490ec3c812b65b9b5b235cb5bc63fbc0448ab8d751106a1f12c81b2\": rpc error: code = NotFound desc = could not find container \"437b68b7d490ec3c812b65b9b5b235cb5bc63fbc0448ab8d751106a1f12c81b2\": container with ID starting with 437b68b7d490ec3c812b65b9b5b235cb5bc63fbc0448ab8d751106a1f12c81b2 not found: ID does not exist" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.847414 4751 scope.go:117] "RemoveContainer" containerID="14cacda155a9f23ba59e4f2a3da7afceb0ef3ab9efee85c5bb5e5e8c246a77bf" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.847603 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"14cacda155a9f23ba59e4f2a3da7afceb0ef3ab9efee85c5bb5e5e8c246a77bf"} err="failed to get container status \"14cacda155a9f23ba59e4f2a3da7afceb0ef3ab9efee85c5bb5e5e8c246a77bf\": rpc error: code = NotFound desc = could not find container \"14cacda155a9f23ba59e4f2a3da7afceb0ef3ab9efee85c5bb5e5e8c246a77bf\": container with ID starting with 14cacda155a9f23ba59e4f2a3da7afceb0ef3ab9efee85c5bb5e5e8c246a77bf not found: ID does not exist" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.847621 4751 scope.go:117] "RemoveContainer" containerID="1604ddf440b351b315839a0e9505346fa56959eb8d95f8bc4383bea7078e51c8" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.847804 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1604ddf440b351b315839a0e9505346fa56959eb8d95f8bc4383bea7078e51c8"} err="failed to get container status \"1604ddf440b351b315839a0e9505346fa56959eb8d95f8bc4383bea7078e51c8\": rpc error: code = NotFound desc = could not find container \"1604ddf440b351b315839a0e9505346fa56959eb8d95f8bc4383bea7078e51c8\": container with ID starting with 1604ddf440b351b315839a0e9505346fa56959eb8d95f8bc4383bea7078e51c8 not found: ID does not exist" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.847824 4751 scope.go:117] "RemoveContainer" 
containerID="437b68b7d490ec3c812b65b9b5b235cb5bc63fbc0448ab8d751106a1f12c81b2" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.847992 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"437b68b7d490ec3c812b65b9b5b235cb5bc63fbc0448ab8d751106a1f12c81b2"} err="failed to get container status \"437b68b7d490ec3c812b65b9b5b235cb5bc63fbc0448ab8d751106a1f12c81b2\": rpc error: code = NotFound desc = could not find container \"437b68b7d490ec3c812b65b9b5b235cb5bc63fbc0448ab8d751106a1f12c81b2\": container with ID starting with 437b68b7d490ec3c812b65b9b5b235cb5bc63fbc0448ab8d751106a1f12c81b2 not found: ID does not exist" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.848018 4751 scope.go:117] "RemoveContainer" containerID="780c4a5951b3f68e86416bac67962e1bcf96fd06498cc8fffeda627b35ea2d9a" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.864787 4751 scope.go:117] "RemoveContainer" containerID="ad949de535e978e687a00582bdf68a90265d128f5c20313eaa241f804c6802e1" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.866668 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.866694 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.866705 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/95acd323-0a11-4e25-8439-f848c8811df5-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.866718 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node 
\"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.866729 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.866739 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/320d0141-d27c-4f4d-9527-ae0f4db2f4fe-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.882952 4751 scope.go:117] "RemoveContainer" containerID="dfe4c141e34d71abd0cb1dc9d7e5475327eb2788a53a9e9597a4f06bd94deca9" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.901456 4751 scope.go:117] "RemoveContainer" containerID="780c4a5951b3f68e86416bac67962e1bcf96fd06498cc8fffeda627b35ea2d9a" Jan 31 15:02:27 crc kubenswrapper[4751]: E0131 15:02:27.902001 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"780c4a5951b3f68e86416bac67962e1bcf96fd06498cc8fffeda627b35ea2d9a\": container with ID starting with 780c4a5951b3f68e86416bac67962e1bcf96fd06498cc8fffeda627b35ea2d9a not found: ID does not exist" containerID="780c4a5951b3f68e86416bac67962e1bcf96fd06498cc8fffeda627b35ea2d9a" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.902056 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"780c4a5951b3f68e86416bac67962e1bcf96fd06498cc8fffeda627b35ea2d9a"} err="failed to get container status \"780c4a5951b3f68e86416bac67962e1bcf96fd06498cc8fffeda627b35ea2d9a\": rpc error: code = NotFound desc = could not find container \"780c4a5951b3f68e86416bac67962e1bcf96fd06498cc8fffeda627b35ea2d9a\": container with ID starting with 780c4a5951b3f68e86416bac67962e1bcf96fd06498cc8fffeda627b35ea2d9a not found: ID does not exist" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 
15:02:27.902107 4751 scope.go:117] "RemoveContainer" containerID="ad949de535e978e687a00582bdf68a90265d128f5c20313eaa241f804c6802e1" Jan 31 15:02:27 crc kubenswrapper[4751]: E0131 15:02:27.902538 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ad949de535e978e687a00582bdf68a90265d128f5c20313eaa241f804c6802e1\": container with ID starting with ad949de535e978e687a00582bdf68a90265d128f5c20313eaa241f804c6802e1 not found: ID does not exist" containerID="ad949de535e978e687a00582bdf68a90265d128f5c20313eaa241f804c6802e1" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.902577 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad949de535e978e687a00582bdf68a90265d128f5c20313eaa241f804c6802e1"} err="failed to get container status \"ad949de535e978e687a00582bdf68a90265d128f5c20313eaa241f804c6802e1\": rpc error: code = NotFound desc = could not find container \"ad949de535e978e687a00582bdf68a90265d128f5c20313eaa241f804c6802e1\": container with ID starting with ad949de535e978e687a00582bdf68a90265d128f5c20313eaa241f804c6802e1 not found: ID does not exist" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.902605 4751 scope.go:117] "RemoveContainer" containerID="dfe4c141e34d71abd0cb1dc9d7e5475327eb2788a53a9e9597a4f06bd94deca9" Jan 31 15:02:27 crc kubenswrapper[4751]: E0131 15:02:27.903296 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dfe4c141e34d71abd0cb1dc9d7e5475327eb2788a53a9e9597a4f06bd94deca9\": container with ID starting with dfe4c141e34d71abd0cb1dc9d7e5475327eb2788a53a9e9597a4f06bd94deca9 not found: ID does not exist" containerID="dfe4c141e34d71abd0cb1dc9d7e5475327eb2788a53a9e9597a4f06bd94deca9" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.903327 4751 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"dfe4c141e34d71abd0cb1dc9d7e5475327eb2788a53a9e9597a4f06bd94deca9"} err="failed to get container status \"dfe4c141e34d71abd0cb1dc9d7e5475327eb2788a53a9e9597a4f06bd94deca9\": rpc error: code = NotFound desc = could not find container \"dfe4c141e34d71abd0cb1dc9d7e5475327eb2788a53a9e9597a4f06bd94deca9\": container with ID starting with dfe4c141e34d71abd0cb1dc9d7e5475327eb2788a53a9e9597a4f06bd94deca9 not found: ID does not exist" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.903348 4751 scope.go:117] "RemoveContainer" containerID="780c4a5951b3f68e86416bac67962e1bcf96fd06498cc8fffeda627b35ea2d9a" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.903799 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"780c4a5951b3f68e86416bac67962e1bcf96fd06498cc8fffeda627b35ea2d9a"} err="failed to get container status \"780c4a5951b3f68e86416bac67962e1bcf96fd06498cc8fffeda627b35ea2d9a\": rpc error: code = NotFound desc = could not find container \"780c4a5951b3f68e86416bac67962e1bcf96fd06498cc8fffeda627b35ea2d9a\": container with ID starting with 780c4a5951b3f68e86416bac67962e1bcf96fd06498cc8fffeda627b35ea2d9a not found: ID does not exist" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.903826 4751 scope.go:117] "RemoveContainer" containerID="ad949de535e978e687a00582bdf68a90265d128f5c20313eaa241f804c6802e1" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.904245 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad949de535e978e687a00582bdf68a90265d128f5c20313eaa241f804c6802e1"} err="failed to get container status \"ad949de535e978e687a00582bdf68a90265d128f5c20313eaa241f804c6802e1\": rpc error: code = NotFound desc = could not find container \"ad949de535e978e687a00582bdf68a90265d128f5c20313eaa241f804c6802e1\": container with ID starting with ad949de535e978e687a00582bdf68a90265d128f5c20313eaa241f804c6802e1 not found: ID does not 
exist" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.904264 4751 scope.go:117] "RemoveContainer" containerID="dfe4c141e34d71abd0cb1dc9d7e5475327eb2788a53a9e9597a4f06bd94deca9" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.904522 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfe4c141e34d71abd0cb1dc9d7e5475327eb2788a53a9e9597a4f06bd94deca9"} err="failed to get container status \"dfe4c141e34d71abd0cb1dc9d7e5475327eb2788a53a9e9597a4f06bd94deca9\": rpc error: code = NotFound desc = could not find container \"dfe4c141e34d71abd0cb1dc9d7e5475327eb2788a53a9e9597a4f06bd94deca9\": container with ID starting with dfe4c141e34d71abd0cb1dc9d7e5475327eb2788a53a9e9597a4f06bd94deca9 not found: ID does not exist" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.904537 4751 scope.go:117] "RemoveContainer" containerID="780c4a5951b3f68e86416bac67962e1bcf96fd06498cc8fffeda627b35ea2d9a" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.904750 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"780c4a5951b3f68e86416bac67962e1bcf96fd06498cc8fffeda627b35ea2d9a"} err="failed to get container status \"780c4a5951b3f68e86416bac67962e1bcf96fd06498cc8fffeda627b35ea2d9a\": rpc error: code = NotFound desc = could not find container \"780c4a5951b3f68e86416bac67962e1bcf96fd06498cc8fffeda627b35ea2d9a\": container with ID starting with 780c4a5951b3f68e86416bac67962e1bcf96fd06498cc8fffeda627b35ea2d9a not found: ID does not exist" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.904764 4751 scope.go:117] "RemoveContainer" containerID="ad949de535e978e687a00582bdf68a90265d128f5c20313eaa241f804c6802e1" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.904982 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad949de535e978e687a00582bdf68a90265d128f5c20313eaa241f804c6802e1"} err="failed to get container status 
\"ad949de535e978e687a00582bdf68a90265d128f5c20313eaa241f804c6802e1\": rpc error: code = NotFound desc = could not find container \"ad949de535e978e687a00582bdf68a90265d128f5c20313eaa241f804c6802e1\": container with ID starting with ad949de535e978e687a00582bdf68a90265d128f5c20313eaa241f804c6802e1 not found: ID does not exist" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.905024 4751 scope.go:117] "RemoveContainer" containerID="dfe4c141e34d71abd0cb1dc9d7e5475327eb2788a53a9e9597a4f06bd94deca9" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.905292 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dfe4c141e34d71abd0cb1dc9d7e5475327eb2788a53a9e9597a4f06bd94deca9"} err="failed to get container status \"dfe4c141e34d71abd0cb1dc9d7e5475327eb2788a53a9e9597a4f06bd94deca9\": rpc error: code = NotFound desc = could not find container \"dfe4c141e34d71abd0cb1dc9d7e5475327eb2788a53a9e9597a4f06bd94deca9\": container with ID starting with dfe4c141e34d71abd0cb1dc9d7e5475327eb2788a53a9e9597a4f06bd94deca9 not found: ID does not exist" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.905308 4751 scope.go:117] "RemoveContainer" containerID="a5e9523257d5d31be3474c9ebb00bf1150499e723b36a65c3a75e75e8cce0b9b" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.936285 4751 scope.go:117] "RemoveContainer" containerID="b399921133ccb6ca2d797749e4080f6c6ff9b63570cdf4da4a13f5d0b74243c8" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.936603 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.942270 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.959851 4751 scope.go:117] "RemoveContainer" containerID="06745ce7f672d1fed8fdec0610aed76d08cb2a27019556cb1771945716a7dd90" Jan 31 15:02:27 
crc kubenswrapper[4751]: I0131 15:02:27.972153 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.978629 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.986895 4751 scope.go:117] "RemoveContainer" containerID="a5e9523257d5d31be3474c9ebb00bf1150499e723b36a65c3a75e75e8cce0b9b" Jan 31 15:02:27 crc kubenswrapper[4751]: E0131 15:02:27.987410 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a5e9523257d5d31be3474c9ebb00bf1150499e723b36a65c3a75e75e8cce0b9b\": container with ID starting with a5e9523257d5d31be3474c9ebb00bf1150499e723b36a65c3a75e75e8cce0b9b not found: ID does not exist" containerID="a5e9523257d5d31be3474c9ebb00bf1150499e723b36a65c3a75e75e8cce0b9b" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.987443 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5e9523257d5d31be3474c9ebb00bf1150499e723b36a65c3a75e75e8cce0b9b"} err="failed to get container status \"a5e9523257d5d31be3474c9ebb00bf1150499e723b36a65c3a75e75e8cce0b9b\": rpc error: code = NotFound desc = could not find container \"a5e9523257d5d31be3474c9ebb00bf1150499e723b36a65c3a75e75e8cce0b9b\": container with ID starting with a5e9523257d5d31be3474c9ebb00bf1150499e723b36a65c3a75e75e8cce0b9b not found: ID does not exist" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.987466 4751 scope.go:117] "RemoveContainer" containerID="b399921133ccb6ca2d797749e4080f6c6ff9b63570cdf4da4a13f5d0b74243c8" Jan 31 15:02:27 crc kubenswrapper[4751]: E0131 15:02:27.987730 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"b399921133ccb6ca2d797749e4080f6c6ff9b63570cdf4da4a13f5d0b74243c8\": container with ID starting with b399921133ccb6ca2d797749e4080f6c6ff9b63570cdf4da4a13f5d0b74243c8 not found: ID does not exist" containerID="b399921133ccb6ca2d797749e4080f6c6ff9b63570cdf4da4a13f5d0b74243c8" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.987755 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b399921133ccb6ca2d797749e4080f6c6ff9b63570cdf4da4a13f5d0b74243c8"} err="failed to get container status \"b399921133ccb6ca2d797749e4080f6c6ff9b63570cdf4da4a13f5d0b74243c8\": rpc error: code = NotFound desc = could not find container \"b399921133ccb6ca2d797749e4080f6c6ff9b63570cdf4da4a13f5d0b74243c8\": container with ID starting with b399921133ccb6ca2d797749e4080f6c6ff9b63570cdf4da4a13f5d0b74243c8 not found: ID does not exist" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.987771 4751 scope.go:117] "RemoveContainer" containerID="06745ce7f672d1fed8fdec0610aed76d08cb2a27019556cb1771945716a7dd90" Jan 31 15:02:27 crc kubenswrapper[4751]: E0131 15:02:27.988246 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"06745ce7f672d1fed8fdec0610aed76d08cb2a27019556cb1771945716a7dd90\": container with ID starting with 06745ce7f672d1fed8fdec0610aed76d08cb2a27019556cb1771945716a7dd90 not found: ID does not exist" containerID="06745ce7f672d1fed8fdec0610aed76d08cb2a27019556cb1771945716a7dd90" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.988279 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06745ce7f672d1fed8fdec0610aed76d08cb2a27019556cb1771945716a7dd90"} err="failed to get container status \"06745ce7f672d1fed8fdec0610aed76d08cb2a27019556cb1771945716a7dd90\": rpc error: code = NotFound desc = could not find container \"06745ce7f672d1fed8fdec0610aed76d08cb2a27019556cb1771945716a7dd90\": container with ID 
starting with 06745ce7f672d1fed8fdec0610aed76d08cb2a27019556cb1771945716a7dd90 not found: ID does not exist" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.988297 4751 scope.go:117] "RemoveContainer" containerID="a5e9523257d5d31be3474c9ebb00bf1150499e723b36a65c3a75e75e8cce0b9b" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.988516 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5e9523257d5d31be3474c9ebb00bf1150499e723b36a65c3a75e75e8cce0b9b"} err="failed to get container status \"a5e9523257d5d31be3474c9ebb00bf1150499e723b36a65c3a75e75e8cce0b9b\": rpc error: code = NotFound desc = could not find container \"a5e9523257d5d31be3474c9ebb00bf1150499e723b36a65c3a75e75e8cce0b9b\": container with ID starting with a5e9523257d5d31be3474c9ebb00bf1150499e723b36a65c3a75e75e8cce0b9b not found: ID does not exist" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.988539 4751 scope.go:117] "RemoveContainer" containerID="b399921133ccb6ca2d797749e4080f6c6ff9b63570cdf4da4a13f5d0b74243c8" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.988805 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b399921133ccb6ca2d797749e4080f6c6ff9b63570cdf4da4a13f5d0b74243c8"} err="failed to get container status \"b399921133ccb6ca2d797749e4080f6c6ff9b63570cdf4da4a13f5d0b74243c8\": rpc error: code = NotFound desc = could not find container \"b399921133ccb6ca2d797749e4080f6c6ff9b63570cdf4da4a13f5d0b74243c8\": container with ID starting with b399921133ccb6ca2d797749e4080f6c6ff9b63570cdf4da4a13f5d0b74243c8 not found: ID does not exist" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.988829 4751 scope.go:117] "RemoveContainer" containerID="06745ce7f672d1fed8fdec0610aed76d08cb2a27019556cb1771945716a7dd90" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.989220 4751 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"06745ce7f672d1fed8fdec0610aed76d08cb2a27019556cb1771945716a7dd90"} err="failed to get container status \"06745ce7f672d1fed8fdec0610aed76d08cb2a27019556cb1771945716a7dd90\": rpc error: code = NotFound desc = could not find container \"06745ce7f672d1fed8fdec0610aed76d08cb2a27019556cb1771945716a7dd90\": container with ID starting with 06745ce7f672d1fed8fdec0610aed76d08cb2a27019556cb1771945716a7dd90 not found: ID does not exist" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.989243 4751 scope.go:117] "RemoveContainer" containerID="a5e9523257d5d31be3474c9ebb00bf1150499e723b36a65c3a75e75e8cce0b9b" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.989526 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a5e9523257d5d31be3474c9ebb00bf1150499e723b36a65c3a75e75e8cce0b9b"} err="failed to get container status \"a5e9523257d5d31be3474c9ebb00bf1150499e723b36a65c3a75e75e8cce0b9b\": rpc error: code = NotFound desc = could not find container \"a5e9523257d5d31be3474c9ebb00bf1150499e723b36a65c3a75e75e8cce0b9b\": container with ID starting with a5e9523257d5d31be3474c9ebb00bf1150499e723b36a65c3a75e75e8cce0b9b not found: ID does not exist" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.989552 4751 scope.go:117] "RemoveContainer" containerID="b399921133ccb6ca2d797749e4080f6c6ff9b63570cdf4da4a13f5d0b74243c8" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.989784 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b399921133ccb6ca2d797749e4080f6c6ff9b63570cdf4da4a13f5d0b74243c8"} err="failed to get container status \"b399921133ccb6ca2d797749e4080f6c6ff9b63570cdf4da4a13f5d0b74243c8\": rpc error: code = NotFound desc = could not find container \"b399921133ccb6ca2d797749e4080f6c6ff9b63570cdf4da4a13f5d0b74243c8\": container with ID starting with b399921133ccb6ca2d797749e4080f6c6ff9b63570cdf4da4a13f5d0b74243c8 not found: ID does not 
exist" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.989812 4751 scope.go:117] "RemoveContainer" containerID="06745ce7f672d1fed8fdec0610aed76d08cb2a27019556cb1771945716a7dd90" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.990137 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"06745ce7f672d1fed8fdec0610aed76d08cb2a27019556cb1771945716a7dd90"} err="failed to get container status \"06745ce7f672d1fed8fdec0610aed76d08cb2a27019556cb1771945716a7dd90\": rpc error: code = NotFound desc = could not find container \"06745ce7f672d1fed8fdec0610aed76d08cb2a27019556cb1771945716a7dd90\": container with ID starting with 06745ce7f672d1fed8fdec0610aed76d08cb2a27019556cb1771945716a7dd90 not found: ID does not exist" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.990384 4751 scope.go:117] "RemoveContainer" containerID="6ede99f5fdfed08a9d7eac6a184d5660b106e725595d6c7cf676d14eefad9233" Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.991586 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Jan 31 15:02:27 crc kubenswrapper[4751]: I0131 15:02:27.998034 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Jan 31 15:02:28 crc kubenswrapper[4751]: I0131 15:02:28.013354 4751 scope.go:117] "RemoveContainer" containerID="682e4ce2a6210e637ba5e4d2dbaf2ac342164e083e46f40cc551cb32e38ee04b" Jan 31 15:02:28 crc kubenswrapper[4751]: I0131 15:02:28.038329 4751 scope.go:117] "RemoveContainer" containerID="6913cb3476dd5e0905008785656c005c7a92a66aba7f669cab05437045db35fe" Jan 31 15:02:28 crc kubenswrapper[4751]: I0131 15:02:28.053868 4751 scope.go:117] "RemoveContainer" containerID="6ede99f5fdfed08a9d7eac6a184d5660b106e725595d6c7cf676d14eefad9233" Jan 31 15:02:28 crc kubenswrapper[4751]: E0131 15:02:28.054288 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound 
desc = could not find container \"6ede99f5fdfed08a9d7eac6a184d5660b106e725595d6c7cf676d14eefad9233\": container with ID starting with 6ede99f5fdfed08a9d7eac6a184d5660b106e725595d6c7cf676d14eefad9233 not found: ID does not exist" containerID="6ede99f5fdfed08a9d7eac6a184d5660b106e725595d6c7cf676d14eefad9233" Jan 31 15:02:28 crc kubenswrapper[4751]: I0131 15:02:28.054334 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ede99f5fdfed08a9d7eac6a184d5660b106e725595d6c7cf676d14eefad9233"} err="failed to get container status \"6ede99f5fdfed08a9d7eac6a184d5660b106e725595d6c7cf676d14eefad9233\": rpc error: code = NotFound desc = could not find container \"6ede99f5fdfed08a9d7eac6a184d5660b106e725595d6c7cf676d14eefad9233\": container with ID starting with 6ede99f5fdfed08a9d7eac6a184d5660b106e725595d6c7cf676d14eefad9233 not found: ID does not exist" Jan 31 15:02:28 crc kubenswrapper[4751]: I0131 15:02:28.054358 4751 scope.go:117] "RemoveContainer" containerID="682e4ce2a6210e637ba5e4d2dbaf2ac342164e083e46f40cc551cb32e38ee04b" Jan 31 15:02:28 crc kubenswrapper[4751]: E0131 15:02:28.054654 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"682e4ce2a6210e637ba5e4d2dbaf2ac342164e083e46f40cc551cb32e38ee04b\": container with ID starting with 682e4ce2a6210e637ba5e4d2dbaf2ac342164e083e46f40cc551cb32e38ee04b not found: ID does not exist" containerID="682e4ce2a6210e637ba5e4d2dbaf2ac342164e083e46f40cc551cb32e38ee04b" Jan 31 15:02:28 crc kubenswrapper[4751]: I0131 15:02:28.054675 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"682e4ce2a6210e637ba5e4d2dbaf2ac342164e083e46f40cc551cb32e38ee04b"} err="failed to get container status \"682e4ce2a6210e637ba5e4d2dbaf2ac342164e083e46f40cc551cb32e38ee04b\": rpc error: code = NotFound desc = could not find container 
\"682e4ce2a6210e637ba5e4d2dbaf2ac342164e083e46f40cc551cb32e38ee04b\": container with ID starting with 682e4ce2a6210e637ba5e4d2dbaf2ac342164e083e46f40cc551cb32e38ee04b not found: ID does not exist" Jan 31 15:02:28 crc kubenswrapper[4751]: I0131 15:02:28.054708 4751 scope.go:117] "RemoveContainer" containerID="6913cb3476dd5e0905008785656c005c7a92a66aba7f669cab05437045db35fe" Jan 31 15:02:28 crc kubenswrapper[4751]: E0131 15:02:28.054973 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6913cb3476dd5e0905008785656c005c7a92a66aba7f669cab05437045db35fe\": container with ID starting with 6913cb3476dd5e0905008785656c005c7a92a66aba7f669cab05437045db35fe not found: ID does not exist" containerID="6913cb3476dd5e0905008785656c005c7a92a66aba7f669cab05437045db35fe" Jan 31 15:02:28 crc kubenswrapper[4751]: I0131 15:02:28.054993 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6913cb3476dd5e0905008785656c005c7a92a66aba7f669cab05437045db35fe"} err="failed to get container status \"6913cb3476dd5e0905008785656c005c7a92a66aba7f669cab05437045db35fe\": rpc error: code = NotFound desc = could not find container \"6913cb3476dd5e0905008785656c005c7a92a66aba7f669cab05437045db35fe\": container with ID starting with 6913cb3476dd5e0905008785656c005c7a92a66aba7f669cab05437045db35fe not found: ID does not exist" Jan 31 15:02:28 crc kubenswrapper[4751]: I0131 15:02:28.055040 4751 scope.go:117] "RemoveContainer" containerID="6ede99f5fdfed08a9d7eac6a184d5660b106e725595d6c7cf676d14eefad9233" Jan 31 15:02:28 crc kubenswrapper[4751]: I0131 15:02:28.055236 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ede99f5fdfed08a9d7eac6a184d5660b106e725595d6c7cf676d14eefad9233"} err="failed to get container status \"6ede99f5fdfed08a9d7eac6a184d5660b106e725595d6c7cf676d14eefad9233\": rpc error: code = NotFound desc = could not find 
container \"6ede99f5fdfed08a9d7eac6a184d5660b106e725595d6c7cf676d14eefad9233\": container with ID starting with 6ede99f5fdfed08a9d7eac6a184d5660b106e725595d6c7cf676d14eefad9233 not found: ID does not exist" Jan 31 15:02:28 crc kubenswrapper[4751]: I0131 15:02:28.055274 4751 scope.go:117] "RemoveContainer" containerID="682e4ce2a6210e637ba5e4d2dbaf2ac342164e083e46f40cc551cb32e38ee04b" Jan 31 15:02:28 crc kubenswrapper[4751]: I0131 15:02:28.055466 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"682e4ce2a6210e637ba5e4d2dbaf2ac342164e083e46f40cc551cb32e38ee04b"} err="failed to get container status \"682e4ce2a6210e637ba5e4d2dbaf2ac342164e083e46f40cc551cb32e38ee04b\": rpc error: code = NotFound desc = could not find container \"682e4ce2a6210e637ba5e4d2dbaf2ac342164e083e46f40cc551cb32e38ee04b\": container with ID starting with 682e4ce2a6210e637ba5e4d2dbaf2ac342164e083e46f40cc551cb32e38ee04b not found: ID does not exist" Jan 31 15:02:28 crc kubenswrapper[4751]: I0131 15:02:28.055482 4751 scope.go:117] "RemoveContainer" containerID="6913cb3476dd5e0905008785656c005c7a92a66aba7f669cab05437045db35fe" Jan 31 15:02:28 crc kubenswrapper[4751]: I0131 15:02:28.055653 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6913cb3476dd5e0905008785656c005c7a92a66aba7f669cab05437045db35fe"} err="failed to get container status \"6913cb3476dd5e0905008785656c005c7a92a66aba7f669cab05437045db35fe\": rpc error: code = NotFound desc = could not find container \"6913cb3476dd5e0905008785656c005c7a92a66aba7f669cab05437045db35fe\": container with ID starting with 6913cb3476dd5e0905008785656c005c7a92a66aba7f669cab05437045db35fe not found: ID does not exist" Jan 31 15:02:28 crc kubenswrapper[4751]: I0131 15:02:28.055670 4751 scope.go:117] "RemoveContainer" containerID="6ede99f5fdfed08a9d7eac6a184d5660b106e725595d6c7cf676d14eefad9233" Jan 31 15:02:28 crc kubenswrapper[4751]: I0131 15:02:28.055985 4751 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ede99f5fdfed08a9d7eac6a184d5660b106e725595d6c7cf676d14eefad9233"} err="failed to get container status \"6ede99f5fdfed08a9d7eac6a184d5660b106e725595d6c7cf676d14eefad9233\": rpc error: code = NotFound desc = could not find container \"6ede99f5fdfed08a9d7eac6a184d5660b106e725595d6c7cf676d14eefad9233\": container with ID starting with 6ede99f5fdfed08a9d7eac6a184d5660b106e725595d6c7cf676d14eefad9233 not found: ID does not exist" Jan 31 15:02:28 crc kubenswrapper[4751]: I0131 15:02:28.056002 4751 scope.go:117] "RemoveContainer" containerID="682e4ce2a6210e637ba5e4d2dbaf2ac342164e083e46f40cc551cb32e38ee04b" Jan 31 15:02:28 crc kubenswrapper[4751]: I0131 15:02:28.056379 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"682e4ce2a6210e637ba5e4d2dbaf2ac342164e083e46f40cc551cb32e38ee04b"} err="failed to get container status \"682e4ce2a6210e637ba5e4d2dbaf2ac342164e083e46f40cc551cb32e38ee04b\": rpc error: code = NotFound desc = could not find container \"682e4ce2a6210e637ba5e4d2dbaf2ac342164e083e46f40cc551cb32e38ee04b\": container with ID starting with 682e4ce2a6210e637ba5e4d2dbaf2ac342164e083e46f40cc551cb32e38ee04b not found: ID does not exist" Jan 31 15:02:28 crc kubenswrapper[4751]: I0131 15:02:28.056397 4751 scope.go:117] "RemoveContainer" containerID="6913cb3476dd5e0905008785656c005c7a92a66aba7f669cab05437045db35fe" Jan 31 15:02:28 crc kubenswrapper[4751]: I0131 15:02:28.056661 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6913cb3476dd5e0905008785656c005c7a92a66aba7f669cab05437045db35fe"} err="failed to get container status \"6913cb3476dd5e0905008785656c005c7a92a66aba7f669cab05437045db35fe\": rpc error: code = NotFound desc = could not find container \"6913cb3476dd5e0905008785656c005c7a92a66aba7f669cab05437045db35fe\": container with ID starting with 
6913cb3476dd5e0905008785656c005c7a92a66aba7f669cab05437045db35fe not found: ID does not exist" Jan 31 15:02:28 crc kubenswrapper[4751]: I0131 15:02:28.414272 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="320d0141-d27c-4f4d-9527-ae0f4db2f4fe" path="/var/lib/kubelet/pods/320d0141-d27c-4f4d-9527-ae0f4db2f4fe/volumes" Jan 31 15:02:28 crc kubenswrapper[4751]: I0131 15:02:28.415051 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a459e47-85a7-4f4d-84ba-a7d3e01180dc" path="/var/lib/kubelet/pods/6a459e47-85a7-4f4d-84ba-a7d3e01180dc/volumes" Jan 31 15:02:28 crc kubenswrapper[4751]: I0131 15:02:28.416192 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad" path="/var/lib/kubelet/pods/7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad/volumes" Jan 31 15:02:28 crc kubenswrapper[4751]: I0131 15:02:28.416842 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95acd323-0a11-4e25-8439-f848c8811df5" path="/var/lib/kubelet/pods/95acd323-0a11-4e25-8439-f848c8811df5/volumes" Jan 31 15:02:29 crc kubenswrapper[4751]: I0131 15:02:29.160670 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 15:02:29 crc kubenswrapper[4751]: I0131 15:02:29.162000 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="f0b77b88-19a5-4bdc-87a1-6a65273226a2" containerName="glance-log" containerID="cri-o://cdf19d70be1f20e70e6253f8f6e27452c7be5e6f13392a31b245256551aa31c1" gracePeriod=30 Jan 31 15:02:29 crc kubenswrapper[4751]: I0131 15:02:29.162194 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="f0b77b88-19a5-4bdc-87a1-6a65273226a2" containerName="glance-httpd" containerID="cri-o://e332c13695ee9418872977980d66d846c226e037834cc39d0c88b742a39fc6a9" 
gracePeriod=30 Jan 31 15:02:29 crc kubenswrapper[4751]: I0131 15:02:29.162129 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="f0b77b88-19a5-4bdc-87a1-6a65273226a2" containerName="glance-api" containerID="cri-o://d00daa5873d01a2a52917e99a159d4dd523630cf9771ab415ea43dc1ab2768ec" gracePeriod=30 Jan 31 15:02:29 crc kubenswrapper[4751]: I0131 15:02:29.648589 4751 generic.go:334] "Generic (PLEG): container finished" podID="f0b77b88-19a5-4bdc-87a1-6a65273226a2" containerID="d00daa5873d01a2a52917e99a159d4dd523630cf9771ab415ea43dc1ab2768ec" exitCode=0 Jan 31 15:02:29 crc kubenswrapper[4751]: I0131 15:02:29.648635 4751 generic.go:334] "Generic (PLEG): container finished" podID="f0b77b88-19a5-4bdc-87a1-6a65273226a2" containerID="e332c13695ee9418872977980d66d846c226e037834cc39d0c88b742a39fc6a9" exitCode=0 Jan 31 15:02:29 crc kubenswrapper[4751]: I0131 15:02:29.648643 4751 generic.go:334] "Generic (PLEG): container finished" podID="f0b77b88-19a5-4bdc-87a1-6a65273226a2" containerID="cdf19d70be1f20e70e6253f8f6e27452c7be5e6f13392a31b245256551aa31c1" exitCode=143 Jan 31 15:02:29 crc kubenswrapper[4751]: I0131 15:02:29.648672 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"f0b77b88-19a5-4bdc-87a1-6a65273226a2","Type":"ContainerDied","Data":"d00daa5873d01a2a52917e99a159d4dd523630cf9771ab415ea43dc1ab2768ec"} Jan 31 15:02:29 crc kubenswrapper[4751]: I0131 15:02:29.648722 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"f0b77b88-19a5-4bdc-87a1-6a65273226a2","Type":"ContainerDied","Data":"e332c13695ee9418872977980d66d846c226e037834cc39d0c88b742a39fc6a9"} Jan 31 15:02:29 crc kubenswrapper[4751]: I0131 15:02:29.648735 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" 
event={"ID":"f0b77b88-19a5-4bdc-87a1-6a65273226a2","Type":"ContainerDied","Data":"cdf19d70be1f20e70e6253f8f6e27452c7be5e6f13392a31b245256551aa31c1"} Jan 31 15:02:29 crc kubenswrapper[4751]: I0131 15:02:29.713705 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 15:02:29 crc kubenswrapper[4751]: I0131 15:02:29.714258 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="5c99f5b1-8566-4141-9bd4-71a75e7f43b6" containerName="glance-log" containerID="cri-o://88dad209ae8e8d063a908076aa5adea9728c00caf34294e807966d88047f26f8" gracePeriod=30 Jan 31 15:02:29 crc kubenswrapper[4751]: I0131 15:02:29.714326 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="5c99f5b1-8566-4141-9bd4-71a75e7f43b6" containerName="glance-httpd" containerID="cri-o://b116da01d3251d46a5037ade80e62e2ba05b68577c23f5fe7ea7d5c1c6939e12" gracePeriod=30 Jan 31 15:02:29 crc kubenswrapper[4751]: I0131 15:02:29.714332 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="5c99f5b1-8566-4141-9bd4-71a75e7f43b6" containerName="glance-api" containerID="cri-o://979026a33bcf12c3258686b10538e86fd757dcafb8921f3a74303169ea20efee" gracePeriod=30 Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.034931 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.112539 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0b77b88-19a5-4bdc-87a1-6a65273226a2-logs\") pod \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.112587 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-sys\") pod \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.112613 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-etc-iscsi\") pod \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.112638 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0b77b88-19a5-4bdc-87a1-6a65273226a2-config-data\") pod \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.112652 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-run\") pod \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.112679 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l8cqp\" (UniqueName: 
\"kubernetes.io/projected/f0b77b88-19a5-4bdc-87a1-6a65273226a2-kube-api-access-l8cqp\") pod \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.112711 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-lib-modules\") pod \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.112729 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0b77b88-19a5-4bdc-87a1-6a65273226a2-scripts\") pod \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.112748 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.112769 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.112787 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-var-locks-brick\") pod \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.112806 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started 
for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-etc-nvme\") pod \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.112847 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-dev\") pod \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.112860 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f0b77b88-19a5-4bdc-87a1-6a65273226a2-httpd-run\") pod \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\" (UID: \"f0b77b88-19a5-4bdc-87a1-6a65273226a2\") " Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.113114 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f0b77b88-19a5-4bdc-87a1-6a65273226a2" (UID: "f0b77b88-19a5-4bdc-87a1-6a65273226a2"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.113177 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "f0b77b88-19a5-4bdc-87a1-6a65273226a2" (UID: "f0b77b88-19a5-4bdc-87a1-6a65273226a2"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.113218 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-run" (OuterVolumeSpecName: "run") pod "f0b77b88-19a5-4bdc-87a1-6a65273226a2" (UID: "f0b77b88-19a5-4bdc-87a1-6a65273226a2"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.113251 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-sys" (OuterVolumeSpecName: "sys") pod "f0b77b88-19a5-4bdc-87a1-6a65273226a2" (UID: "f0b77b88-19a5-4bdc-87a1-6a65273226a2"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.113419 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0b77b88-19a5-4bdc-87a1-6a65273226a2-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f0b77b88-19a5-4bdc-87a1-6a65273226a2" (UID: "f0b77b88-19a5-4bdc-87a1-6a65273226a2"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.113461 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "f0b77b88-19a5-4bdc-87a1-6a65273226a2" (UID: "f0b77b88-19a5-4bdc-87a1-6a65273226a2"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.113480 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "f0b77b88-19a5-4bdc-87a1-6a65273226a2" (UID: "f0b77b88-19a5-4bdc-87a1-6a65273226a2"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.113496 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-dev" (OuterVolumeSpecName: "dev") pod "f0b77b88-19a5-4bdc-87a1-6a65273226a2" (UID: "f0b77b88-19a5-4bdc-87a1-6a65273226a2"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.113519 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0b77b88-19a5-4bdc-87a1-6a65273226a2-logs" (OuterVolumeSpecName: "logs") pod "f0b77b88-19a5-4bdc-87a1-6a65273226a2" (UID: "f0b77b88-19a5-4bdc-87a1-6a65273226a2"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.119065 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance-cache") pod "f0b77b88-19a5-4bdc-87a1-6a65273226a2" (UID: "f0b77b88-19a5-4bdc-87a1-6a65273226a2"). InnerVolumeSpecName "local-storage02-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.119181 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0b77b88-19a5-4bdc-87a1-6a65273226a2-kube-api-access-l8cqp" (OuterVolumeSpecName: "kube-api-access-l8cqp") pod "f0b77b88-19a5-4bdc-87a1-6a65273226a2" (UID: "f0b77b88-19a5-4bdc-87a1-6a65273226a2"). InnerVolumeSpecName "kube-api-access-l8cqp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.119233 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0b77b88-19a5-4bdc-87a1-6a65273226a2-scripts" (OuterVolumeSpecName: "scripts") pod "f0b77b88-19a5-4bdc-87a1-6a65273226a2" (UID: "f0b77b88-19a5-4bdc-87a1-6a65273226a2"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.122279 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "f0b77b88-19a5-4bdc-87a1-6a65273226a2" (UID: "f0b77b88-19a5-4bdc-87a1-6a65273226a2"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.181247 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f0b77b88-19a5-4bdc-87a1-6a65273226a2-config-data" (OuterVolumeSpecName: "config-data") pod "f0b77b88-19a5-4bdc-87a1-6a65273226a2" (UID: "f0b77b88-19a5-4bdc-87a1-6a65273226a2"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.214239 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.214278 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.214295 4751 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.214308 4751 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.214321 4751 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-dev\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.214331 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f0b77b88-19a5-4bdc-87a1-6a65273226a2-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.214341 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f0b77b88-19a5-4bdc-87a1-6a65273226a2-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.214351 4751 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-sys\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.214360 4751 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.214372 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f0b77b88-19a5-4bdc-87a1-6a65273226a2-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.214383 4751 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.214394 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l8cqp\" (UniqueName: \"kubernetes.io/projected/f0b77b88-19a5-4bdc-87a1-6a65273226a2-kube-api-access-l8cqp\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.214404 4751 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f0b77b88-19a5-4bdc-87a1-6a65273226a2-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.214414 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f0b77b88-19a5-4bdc-87a1-6a65273226a2-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.226272 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.229324 4751 operation_generator.go:917] 
UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.315906 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.315941 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.427784 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.518213 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-dev\") pod \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.518268 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-lib-modules\") pod \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.518305 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-sys\") pod \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.518356 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-w868q\" (UniqueName: \"kubernetes.io/projected/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-kube-api-access-w868q\") pod \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.518381 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-etc-iscsi\") pod \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.518377 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-dev" (OuterVolumeSpecName: "dev") pod "5c99f5b1-8566-4141-9bd4-71a75e7f43b6" (UID: "5c99f5b1-8566-4141-9bd4-71a75e7f43b6"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.518413 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-logs\") pod \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.518413 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-sys" (OuterVolumeSpecName: "sys") pod "5c99f5b1-8566-4141-9bd4-71a75e7f43b6" (UID: "5c99f5b1-8566-4141-9bd4-71a75e7f43b6"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.518408 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "5c99f5b1-8566-4141-9bd4-71a75e7f43b6" (UID: "5c99f5b1-8566-4141-9bd4-71a75e7f43b6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.518449 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "5c99f5b1-8566-4141-9bd4-71a75e7f43b6" (UID: "5c99f5b1-8566-4141-9bd4-71a75e7f43b6"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.518466 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-config-data\") pod \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.518490 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.518523 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-scripts\") pod \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.518557 4751 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.518581 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-var-locks-brick\") pod \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.518630 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-httpd-run\") pod \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.518655 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-run\") pod \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.518692 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-etc-nvme\") pod \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\" (UID: \"5c99f5b1-8566-4141-9bd4-71a75e7f43b6\") " Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.518874 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-logs" (OuterVolumeSpecName: "logs") pod "5c99f5b1-8566-4141-9bd4-71a75e7f43b6" (UID: "5c99f5b1-8566-4141-9bd4-71a75e7f43b6"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.518940 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-run" (OuterVolumeSpecName: "run") pod "5c99f5b1-8566-4141-9bd4-71a75e7f43b6" (UID: "5c99f5b1-8566-4141-9bd4-71a75e7f43b6"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.519239 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "5c99f5b1-8566-4141-9bd4-71a75e7f43b6" (UID: "5c99f5b1-8566-4141-9bd4-71a75e7f43b6"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.519450 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.519484 4751 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.519495 4751 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.519504 4751 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-dev\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.519512 4751 reconciler_common.go:293] "Volume detached for 
volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.519520 4751 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-sys\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.519529 4751 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.520349 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "5c99f5b1-8566-4141-9bd4-71a75e7f43b6" (UID: "5c99f5b1-8566-4141-9bd4-71a75e7f43b6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.522381 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage17-crc" (OuterVolumeSpecName: "glance-cache") pod "5c99f5b1-8566-4141-9bd4-71a75e7f43b6" (UID: "5c99f5b1-8566-4141-9bd4-71a75e7f43b6"). InnerVolumeSpecName "local-storage17-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.525540 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "5c99f5b1-8566-4141-9bd4-71a75e7f43b6" (UID: "5c99f5b1-8566-4141-9bd4-71a75e7f43b6"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.527267 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage20-crc" (OuterVolumeSpecName: "glance") pod "5c99f5b1-8566-4141-9bd4-71a75e7f43b6" (UID: "5c99f5b1-8566-4141-9bd4-71a75e7f43b6"). InnerVolumeSpecName "local-storage20-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.534288 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-scripts" (OuterVolumeSpecName: "scripts") pod "5c99f5b1-8566-4141-9bd4-71a75e7f43b6" (UID: "5c99f5b1-8566-4141-9bd4-71a75e7f43b6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.535253 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-kube-api-access-w868q" (OuterVolumeSpecName: "kube-api-access-w868q") pod "5c99f5b1-8566-4141-9bd4-71a75e7f43b6" (UID: "5c99f5b1-8566-4141-9bd4-71a75e7f43b6"). InnerVolumeSpecName "kube-api-access-w868q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.597290 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-config-data" (OuterVolumeSpecName: "config-data") pod "5c99f5b1-8566-4141-9bd4-71a75e7f43b6" (UID: "5c99f5b1-8566-4141-9bd4-71a75e7f43b6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.621106 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w868q\" (UniqueName: \"kubernetes.io/projected/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-kube-api-access-w868q\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.621139 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.621170 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") on node \"crc\" " Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.621179 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.621193 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") on node \"crc\" " Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.621201 4751 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.621209 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/5c99f5b1-8566-4141-9bd4-71a75e7f43b6-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.635493 4751 operation_generator.go:917] UnmountDevice succeeded 
for volume "local-storage17-crc" (UniqueName: "kubernetes.io/local-volume/local-storage17-crc") on node "crc" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.646833 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage20-crc" (UniqueName: "kubernetes.io/local-volume/local-storage20-crc") on node "crc" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.657926 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"f0b77b88-19a5-4bdc-87a1-6a65273226a2","Type":"ContainerDied","Data":"58907fb108567ccc157e944740b878da74b00dd4afd8f71e705346251d50d030"} Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.657970 4751 scope.go:117] "RemoveContainer" containerID="d00daa5873d01a2a52917e99a159d4dd523630cf9771ab415ea43dc1ab2768ec" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.657968 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.661284 4751 generic.go:334] "Generic (PLEG): container finished" podID="5c99f5b1-8566-4141-9bd4-71a75e7f43b6" containerID="979026a33bcf12c3258686b10538e86fd757dcafb8921f3a74303169ea20efee" exitCode=0 Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.661308 4751 generic.go:334] "Generic (PLEG): container finished" podID="5c99f5b1-8566-4141-9bd4-71a75e7f43b6" containerID="b116da01d3251d46a5037ade80e62e2ba05b68577c23f5fe7ea7d5c1c6939e12" exitCode=0 Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.661315 4751 generic.go:334] "Generic (PLEG): container finished" podID="5c99f5b1-8566-4141-9bd4-71a75e7f43b6" containerID="88dad209ae8e8d063a908076aa5adea9728c00caf34294e807966d88047f26f8" exitCode=143 Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.661334 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" 
event={"ID":"5c99f5b1-8566-4141-9bd4-71a75e7f43b6","Type":"ContainerDied","Data":"979026a33bcf12c3258686b10538e86fd757dcafb8921f3a74303169ea20efee"} Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.661359 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"5c99f5b1-8566-4141-9bd4-71a75e7f43b6","Type":"ContainerDied","Data":"b116da01d3251d46a5037ade80e62e2ba05b68577c23f5fe7ea7d5c1c6939e12"} Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.661368 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"5c99f5b1-8566-4141-9bd4-71a75e7f43b6","Type":"ContainerDied","Data":"88dad209ae8e8d063a908076aa5adea9728c00caf34294e807966d88047f26f8"} Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.661378 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"5c99f5b1-8566-4141-9bd4-71a75e7f43b6","Type":"ContainerDied","Data":"632343238b8b6273cfe0d462a1823f7261ef1f48b55a453cfe7a4028e8a3bc11"} Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.661608 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.684633 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.698648 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.704842 4751 scope.go:117] "RemoveContainer" containerID="e332c13695ee9418872977980d66d846c226e037834cc39d0c88b742a39fc6a9" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.717208 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.722033 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.722081 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.722927 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.730299 4751 scope.go:117] "RemoveContainer" containerID="cdf19d70be1f20e70e6253f8f6e27452c7be5e6f13392a31b245256551aa31c1" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.753638 4751 scope.go:117] "RemoveContainer" containerID="979026a33bcf12c3258686b10538e86fd757dcafb8921f3a74303169ea20efee" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.769812 4751 scope.go:117] "RemoveContainer" containerID="b116da01d3251d46a5037ade80e62e2ba05b68577c23f5fe7ea7d5c1c6939e12" Jan 31 15:02:30 crc 
kubenswrapper[4751]: I0131 15:02:30.789727 4751 scope.go:117] "RemoveContainer" containerID="88dad209ae8e8d063a908076aa5adea9728c00caf34294e807966d88047f26f8" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.813750 4751 scope.go:117] "RemoveContainer" containerID="979026a33bcf12c3258686b10538e86fd757dcafb8921f3a74303169ea20efee" Jan 31 15:02:30 crc kubenswrapper[4751]: E0131 15:02:30.814206 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"979026a33bcf12c3258686b10538e86fd757dcafb8921f3a74303169ea20efee\": container with ID starting with 979026a33bcf12c3258686b10538e86fd757dcafb8921f3a74303169ea20efee not found: ID does not exist" containerID="979026a33bcf12c3258686b10538e86fd757dcafb8921f3a74303169ea20efee" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.814229 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"979026a33bcf12c3258686b10538e86fd757dcafb8921f3a74303169ea20efee"} err="failed to get container status \"979026a33bcf12c3258686b10538e86fd757dcafb8921f3a74303169ea20efee\": rpc error: code = NotFound desc = could not find container \"979026a33bcf12c3258686b10538e86fd757dcafb8921f3a74303169ea20efee\": container with ID starting with 979026a33bcf12c3258686b10538e86fd757dcafb8921f3a74303169ea20efee not found: ID does not exist" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.814251 4751 scope.go:117] "RemoveContainer" containerID="b116da01d3251d46a5037ade80e62e2ba05b68577c23f5fe7ea7d5c1c6939e12" Jan 31 15:02:30 crc kubenswrapper[4751]: E0131 15:02:30.814701 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b116da01d3251d46a5037ade80e62e2ba05b68577c23f5fe7ea7d5c1c6939e12\": container with ID starting with b116da01d3251d46a5037ade80e62e2ba05b68577c23f5fe7ea7d5c1c6939e12 not found: ID does not exist" 
containerID="b116da01d3251d46a5037ade80e62e2ba05b68577c23f5fe7ea7d5c1c6939e12" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.814722 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b116da01d3251d46a5037ade80e62e2ba05b68577c23f5fe7ea7d5c1c6939e12"} err="failed to get container status \"b116da01d3251d46a5037ade80e62e2ba05b68577c23f5fe7ea7d5c1c6939e12\": rpc error: code = NotFound desc = could not find container \"b116da01d3251d46a5037ade80e62e2ba05b68577c23f5fe7ea7d5c1c6939e12\": container with ID starting with b116da01d3251d46a5037ade80e62e2ba05b68577c23f5fe7ea7d5c1c6939e12 not found: ID does not exist" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.814737 4751 scope.go:117] "RemoveContainer" containerID="88dad209ae8e8d063a908076aa5adea9728c00caf34294e807966d88047f26f8" Jan 31 15:02:30 crc kubenswrapper[4751]: E0131 15:02:30.814999 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"88dad209ae8e8d063a908076aa5adea9728c00caf34294e807966d88047f26f8\": container with ID starting with 88dad209ae8e8d063a908076aa5adea9728c00caf34294e807966d88047f26f8 not found: ID does not exist" containerID="88dad209ae8e8d063a908076aa5adea9728c00caf34294e807966d88047f26f8" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.815019 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88dad209ae8e8d063a908076aa5adea9728c00caf34294e807966d88047f26f8"} err="failed to get container status \"88dad209ae8e8d063a908076aa5adea9728c00caf34294e807966d88047f26f8\": rpc error: code = NotFound desc = could not find container \"88dad209ae8e8d063a908076aa5adea9728c00caf34294e807966d88047f26f8\": container with ID starting with 88dad209ae8e8d063a908076aa5adea9728c00caf34294e807966d88047f26f8 not found: ID does not exist" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.815033 4751 scope.go:117] 
"RemoveContainer" containerID="979026a33bcf12c3258686b10538e86fd757dcafb8921f3a74303169ea20efee" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.815356 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"979026a33bcf12c3258686b10538e86fd757dcafb8921f3a74303169ea20efee"} err="failed to get container status \"979026a33bcf12c3258686b10538e86fd757dcafb8921f3a74303169ea20efee\": rpc error: code = NotFound desc = could not find container \"979026a33bcf12c3258686b10538e86fd757dcafb8921f3a74303169ea20efee\": container with ID starting with 979026a33bcf12c3258686b10538e86fd757dcafb8921f3a74303169ea20efee not found: ID does not exist" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.815408 4751 scope.go:117] "RemoveContainer" containerID="b116da01d3251d46a5037ade80e62e2ba05b68577c23f5fe7ea7d5c1c6939e12" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.815674 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b116da01d3251d46a5037ade80e62e2ba05b68577c23f5fe7ea7d5c1c6939e12"} err="failed to get container status \"b116da01d3251d46a5037ade80e62e2ba05b68577c23f5fe7ea7d5c1c6939e12\": rpc error: code = NotFound desc = could not find container \"b116da01d3251d46a5037ade80e62e2ba05b68577c23f5fe7ea7d5c1c6939e12\": container with ID starting with b116da01d3251d46a5037ade80e62e2ba05b68577c23f5fe7ea7d5c1c6939e12 not found: ID does not exist" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.815693 4751 scope.go:117] "RemoveContainer" containerID="88dad209ae8e8d063a908076aa5adea9728c00caf34294e807966d88047f26f8" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.815884 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88dad209ae8e8d063a908076aa5adea9728c00caf34294e807966d88047f26f8"} err="failed to get container status \"88dad209ae8e8d063a908076aa5adea9728c00caf34294e807966d88047f26f8\": rpc error: code = 
NotFound desc = could not find container \"88dad209ae8e8d063a908076aa5adea9728c00caf34294e807966d88047f26f8\": container with ID starting with 88dad209ae8e8d063a908076aa5adea9728c00caf34294e807966d88047f26f8 not found: ID does not exist" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.815902 4751 scope.go:117] "RemoveContainer" containerID="979026a33bcf12c3258686b10538e86fd757dcafb8921f3a74303169ea20efee" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.816186 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"979026a33bcf12c3258686b10538e86fd757dcafb8921f3a74303169ea20efee"} err="failed to get container status \"979026a33bcf12c3258686b10538e86fd757dcafb8921f3a74303169ea20efee\": rpc error: code = NotFound desc = could not find container \"979026a33bcf12c3258686b10538e86fd757dcafb8921f3a74303169ea20efee\": container with ID starting with 979026a33bcf12c3258686b10538e86fd757dcafb8921f3a74303169ea20efee not found: ID does not exist" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.816205 4751 scope.go:117] "RemoveContainer" containerID="b116da01d3251d46a5037ade80e62e2ba05b68577c23f5fe7ea7d5c1c6939e12" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.816529 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b116da01d3251d46a5037ade80e62e2ba05b68577c23f5fe7ea7d5c1c6939e12"} err="failed to get container status \"b116da01d3251d46a5037ade80e62e2ba05b68577c23f5fe7ea7d5c1c6939e12\": rpc error: code = NotFound desc = could not find container \"b116da01d3251d46a5037ade80e62e2ba05b68577c23f5fe7ea7d5c1c6939e12\": container with ID starting with b116da01d3251d46a5037ade80e62e2ba05b68577c23f5fe7ea7d5c1c6939e12 not found: ID does not exist" Jan 31 15:02:30 crc kubenswrapper[4751]: I0131 15:02:30.816548 4751 scope.go:117] "RemoveContainer" containerID="88dad209ae8e8d063a908076aa5adea9728c00caf34294e807966d88047f26f8" Jan 31 15:02:30 crc 
kubenswrapper[4751]: I0131 15:02:30.816754 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"88dad209ae8e8d063a908076aa5adea9728c00caf34294e807966d88047f26f8"} err="failed to get container status \"88dad209ae8e8d063a908076aa5adea9728c00caf34294e807966d88047f26f8\": rpc error: code = NotFound desc = could not find container \"88dad209ae8e8d063a908076aa5adea9728c00caf34294e807966d88047f26f8\": container with ID starting with 88dad209ae8e8d063a908076aa5adea9728c00caf34294e807966d88047f26f8 not found: ID does not exist" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.059738 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-sync-mxvm7"] Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.065445 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-sync-mxvm7"] Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.099596 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glancea977-account-delete-zhttj"] Jan 31 15:02:32 crc kubenswrapper[4751]: E0131 15:02:32.099920 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad" containerName="glance-api" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.099940 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad" containerName="glance-api" Jan 31 15:02:32 crc kubenswrapper[4751]: E0131 15:02:32.099965 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c99f5b1-8566-4141-9bd4-71a75e7f43b6" containerName="glance-log" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.099972 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c99f5b1-8566-4141-9bd4-71a75e7f43b6" containerName="glance-log" Jan 31 15:02:32 crc kubenswrapper[4751]: E0131 15:02:32.099985 4751 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5c99f5b1-8566-4141-9bd4-71a75e7f43b6" containerName="glance-httpd" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.099992 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c99f5b1-8566-4141-9bd4-71a75e7f43b6" containerName="glance-httpd" Jan 31 15:02:32 crc kubenswrapper[4751]: E0131 15:02:32.100009 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad" containerName="glance-httpd" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.100014 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad" containerName="glance-httpd" Jan 31 15:02:32 crc kubenswrapper[4751]: E0131 15:02:32.100025 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad" containerName="glance-log" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.100032 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad" containerName="glance-log" Jan 31 15:02:32 crc kubenswrapper[4751]: E0131 15:02:32.100040 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95acd323-0a11-4e25-8439-f848c8811df5" containerName="glance-api" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.100049 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="95acd323-0a11-4e25-8439-f848c8811df5" containerName="glance-api" Jan 31 15:02:32 crc kubenswrapper[4751]: E0131 15:02:32.100080 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5c99f5b1-8566-4141-9bd4-71a75e7f43b6" containerName="glance-api" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.100088 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="5c99f5b1-8566-4141-9bd4-71a75e7f43b6" containerName="glance-api" Jan 31 15:02:32 crc kubenswrapper[4751]: E0131 15:02:32.100099 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0b77b88-19a5-4bdc-87a1-6a65273226a2" 
containerName="glance-api" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.100105 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0b77b88-19a5-4bdc-87a1-6a65273226a2" containerName="glance-api" Jan 31 15:02:32 crc kubenswrapper[4751]: E0131 15:02:32.100117 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a459e47-85a7-4f4d-84ba-a7d3e01180dc" containerName="glance-api" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.100124 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a459e47-85a7-4f4d-84ba-a7d3e01180dc" containerName="glance-api" Jan 31 15:02:32 crc kubenswrapper[4751]: E0131 15:02:32.100136 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0b77b88-19a5-4bdc-87a1-6a65273226a2" containerName="glance-httpd" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.100144 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0b77b88-19a5-4bdc-87a1-6a65273226a2" containerName="glance-httpd" Jan 31 15:02:32 crc kubenswrapper[4751]: E0131 15:02:32.100159 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95acd323-0a11-4e25-8439-f848c8811df5" containerName="glance-log" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.100166 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="95acd323-0a11-4e25-8439-f848c8811df5" containerName="glance-log" Jan 31 15:02:32 crc kubenswrapper[4751]: E0131 15:02:32.100179 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0b77b88-19a5-4bdc-87a1-6a65273226a2" containerName="glance-log" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.100186 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0b77b88-19a5-4bdc-87a1-6a65273226a2" containerName="glance-log" Jan 31 15:02:32 crc kubenswrapper[4751]: E0131 15:02:32.100197 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a459e47-85a7-4f4d-84ba-a7d3e01180dc" containerName="glance-httpd" Jan 31 15:02:32 crc 
kubenswrapper[4751]: I0131 15:02:32.100206 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="6a459e47-85a7-4f4d-84ba-a7d3e01180dc" containerName="glance-httpd" Jan 31 15:02:32 crc kubenswrapper[4751]: E0131 15:02:32.100223 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="320d0141-d27c-4f4d-9527-ae0f4db2f4fe" containerName="glance-api" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.100231 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="320d0141-d27c-4f4d-9527-ae0f4db2f4fe" containerName="glance-api" Jan 31 15:02:32 crc kubenswrapper[4751]: E0131 15:02:32.100243 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95acd323-0a11-4e25-8439-f848c8811df5" containerName="glance-httpd" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.100250 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="95acd323-0a11-4e25-8439-f848c8811df5" containerName="glance-httpd" Jan 31 15:02:32 crc kubenswrapper[4751]: E0131 15:02:32.100263 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="320d0141-d27c-4f4d-9527-ae0f4db2f4fe" containerName="glance-httpd" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.100270 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="320d0141-d27c-4f4d-9527-ae0f4db2f4fe" containerName="glance-httpd" Jan 31 15:02:32 crc kubenswrapper[4751]: E0131 15:02:32.100281 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="320d0141-d27c-4f4d-9527-ae0f4db2f4fe" containerName="glance-log" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.100288 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="320d0141-d27c-4f4d-9527-ae0f4db2f4fe" containerName="glance-log" Jan 31 15:02:32 crc kubenswrapper[4751]: E0131 15:02:32.100297 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6a459e47-85a7-4f4d-84ba-a7d3e01180dc" containerName="glance-log" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.100302 4751 
state_mem.go:107] "Deleted CPUSet assignment" podUID="6a459e47-85a7-4f4d-84ba-a7d3e01180dc" containerName="glance-log" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.100422 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="95acd323-0a11-4e25-8439-f848c8811df5" containerName="glance-log" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.100432 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="95acd323-0a11-4e25-8439-f848c8811df5" containerName="glance-api" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.100441 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a459e47-85a7-4f4d-84ba-a7d3e01180dc" containerName="glance-httpd" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.100448 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad" containerName="glance-httpd" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.100455 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="320d0141-d27c-4f4d-9527-ae0f4db2f4fe" containerName="glance-log" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.100462 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad" containerName="glance-log" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.100472 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0b77b88-19a5-4bdc-87a1-6a65273226a2" containerName="glance-log" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.100480 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0b77b88-19a5-4bdc-87a1-6a65273226a2" containerName="glance-api" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.100490 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c99f5b1-8566-4141-9bd4-71a75e7f43b6" containerName="glance-log" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.100499 4751 memory_manager.go:354] 
"RemoveStaleState removing state" podUID="6a459e47-85a7-4f4d-84ba-a7d3e01180dc" containerName="glance-log" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.100508 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ad25a0a-80c0-46fc-9eb7-c91e86c2d3ad" containerName="glance-api" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.100517 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="95acd323-0a11-4e25-8439-f848c8811df5" containerName="glance-httpd" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.100526 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0b77b88-19a5-4bdc-87a1-6a65273226a2" containerName="glance-httpd" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.100533 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="320d0141-d27c-4f4d-9527-ae0f4db2f4fe" containerName="glance-httpd" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.100539 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c99f5b1-8566-4141-9bd4-71a75e7f43b6" containerName="glance-httpd" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.100546 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="320d0141-d27c-4f4d-9527-ae0f4db2f4fe" containerName="glance-api" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.100554 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="6a459e47-85a7-4f4d-84ba-a7d3e01180dc" containerName="glance-api" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.100561 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="5c99f5b1-8566-4141-9bd4-71a75e7f43b6" containerName="glance-api" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.101134 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glancea977-account-delete-zhttj" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.111289 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glancea977-account-delete-zhttj"] Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.244799 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8h45f\" (UniqueName: \"kubernetes.io/projected/67f83dc8-ae5c-44bf-8760-91952693b0cb-kube-api-access-8h45f\") pod \"glancea977-account-delete-zhttj\" (UID: \"67f83dc8-ae5c-44bf-8760-91952693b0cb\") " pod="glance-kuttl-tests/glancea977-account-delete-zhttj" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.244883 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67f83dc8-ae5c-44bf-8760-91952693b0cb-operator-scripts\") pod \"glancea977-account-delete-zhttj\" (UID: \"67f83dc8-ae5c-44bf-8760-91952693b0cb\") " pod="glance-kuttl-tests/glancea977-account-delete-zhttj" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.346394 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8h45f\" (UniqueName: \"kubernetes.io/projected/67f83dc8-ae5c-44bf-8760-91952693b0cb-kube-api-access-8h45f\") pod \"glancea977-account-delete-zhttj\" (UID: \"67f83dc8-ae5c-44bf-8760-91952693b0cb\") " pod="glance-kuttl-tests/glancea977-account-delete-zhttj" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.346481 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67f83dc8-ae5c-44bf-8760-91952693b0cb-operator-scripts\") pod \"glancea977-account-delete-zhttj\" (UID: \"67f83dc8-ae5c-44bf-8760-91952693b0cb\") " pod="glance-kuttl-tests/glancea977-account-delete-zhttj" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 
15:02:32.347225 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67f83dc8-ae5c-44bf-8760-91952693b0cb-operator-scripts\") pod \"glancea977-account-delete-zhttj\" (UID: \"67f83dc8-ae5c-44bf-8760-91952693b0cb\") " pod="glance-kuttl-tests/glancea977-account-delete-zhttj" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.369043 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8h45f\" (UniqueName: \"kubernetes.io/projected/67f83dc8-ae5c-44bf-8760-91952693b0cb-kube-api-access-8h45f\") pod \"glancea977-account-delete-zhttj\" (UID: \"67f83dc8-ae5c-44bf-8760-91952693b0cb\") " pod="glance-kuttl-tests/glancea977-account-delete-zhttj" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.415797 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glancea977-account-delete-zhttj" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.415935 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5c99f5b1-8566-4141-9bd4-71a75e7f43b6" path="/var/lib/kubelet/pods/5c99f5b1-8566-4141-9bd4-71a75e7f43b6/volumes" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.416831 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dbf741e4-9445-4080-84f2-601e270f7aa0" path="/var/lib/kubelet/pods/dbf741e4-9445-4080-84f2-601e270f7aa0/volumes" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.417815 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0b77b88-19a5-4bdc-87a1-6a65273226a2" path="/var/lib/kubelet/pods/f0b77b88-19a5-4bdc-87a1-6a65273226a2/volumes" Jan 31 15:02:32 crc kubenswrapper[4751]: I0131 15:02:32.859709 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glancea977-account-delete-zhttj"] Jan 31 15:02:33 crc kubenswrapper[4751]: I0131 15:02:33.690474 4751 generic.go:334] "Generic (PLEG): container 
finished" podID="67f83dc8-ae5c-44bf-8760-91952693b0cb" containerID="748bb24fed6fe40319dbeeaf8bdfc4e48c0cf8e80d0e06626f9b2a7dd29a8843" exitCode=0 Jan 31 15:02:33 crc kubenswrapper[4751]: I0131 15:02:33.690576 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glancea977-account-delete-zhttj" event={"ID":"67f83dc8-ae5c-44bf-8760-91952693b0cb","Type":"ContainerDied","Data":"748bb24fed6fe40319dbeeaf8bdfc4e48c0cf8e80d0e06626f9b2a7dd29a8843"} Jan 31 15:02:33 crc kubenswrapper[4751]: I0131 15:02:33.690807 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glancea977-account-delete-zhttj" event={"ID":"67f83dc8-ae5c-44bf-8760-91952693b0cb","Type":"ContainerStarted","Data":"98436c2d7cbd664ed0c8b67784e8453bdc23efe3519e54a49c94c675f566dd23"} Jan 31 15:02:34 crc kubenswrapper[4751]: I0131 15:02:34.971876 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glancea977-account-delete-zhttj" Jan 31 15:02:35 crc kubenswrapper[4751]: I0131 15:02:35.082658 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67f83dc8-ae5c-44bf-8760-91952693b0cb-operator-scripts\") pod \"67f83dc8-ae5c-44bf-8760-91952693b0cb\" (UID: \"67f83dc8-ae5c-44bf-8760-91952693b0cb\") " Jan 31 15:02:35 crc kubenswrapper[4751]: I0131 15:02:35.082727 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8h45f\" (UniqueName: \"kubernetes.io/projected/67f83dc8-ae5c-44bf-8760-91952693b0cb-kube-api-access-8h45f\") pod \"67f83dc8-ae5c-44bf-8760-91952693b0cb\" (UID: \"67f83dc8-ae5c-44bf-8760-91952693b0cb\") " Jan 31 15:02:35 crc kubenswrapper[4751]: I0131 15:02:35.083739 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/67f83dc8-ae5c-44bf-8760-91952693b0cb-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod 
"67f83dc8-ae5c-44bf-8760-91952693b0cb" (UID: "67f83dc8-ae5c-44bf-8760-91952693b0cb"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:02:35 crc kubenswrapper[4751]: I0131 15:02:35.089083 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/67f83dc8-ae5c-44bf-8760-91952693b0cb-kube-api-access-8h45f" (OuterVolumeSpecName: "kube-api-access-8h45f") pod "67f83dc8-ae5c-44bf-8760-91952693b0cb" (UID: "67f83dc8-ae5c-44bf-8760-91952693b0cb"). InnerVolumeSpecName "kube-api-access-8h45f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:02:35 crc kubenswrapper[4751]: I0131 15:02:35.184134 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/67f83dc8-ae5c-44bf-8760-91952693b0cb-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:35 crc kubenswrapper[4751]: I0131 15:02:35.184167 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8h45f\" (UniqueName: \"kubernetes.io/projected/67f83dc8-ae5c-44bf-8760-91952693b0cb-kube-api-access-8h45f\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:35 crc kubenswrapper[4751]: I0131 15:02:35.711230 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glancea977-account-delete-zhttj" event={"ID":"67f83dc8-ae5c-44bf-8760-91952693b0cb","Type":"ContainerDied","Data":"98436c2d7cbd664ed0c8b67784e8453bdc23efe3519e54a49c94c675f566dd23"} Jan 31 15:02:35 crc kubenswrapper[4751]: I0131 15:02:35.711602 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98436c2d7cbd664ed0c8b67784e8453bdc23efe3519e54a49c94c675f566dd23" Jan 31 15:02:35 crc kubenswrapper[4751]: I0131 15:02:35.711279 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glancea977-account-delete-zhttj" Jan 31 15:02:37 crc kubenswrapper[4751]: I0131 15:02:37.125671 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-create-mcgm2"] Jan 31 15:02:37 crc kubenswrapper[4751]: I0131 15:02:37.131583 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-create-mcgm2"] Jan 31 15:02:37 crc kubenswrapper[4751]: I0131 15:02:37.140381 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glancea977-account-delete-zhttj"] Jan 31 15:02:37 crc kubenswrapper[4751]: I0131 15:02:37.145700 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-a977-account-create-update-tlstz"] Jan 31 15:02:37 crc kubenswrapper[4751]: I0131 15:02:37.150277 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-a977-account-create-update-tlstz"] Jan 31 15:02:37 crc kubenswrapper[4751]: I0131 15:02:37.155012 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glancea977-account-delete-zhttj"] Jan 31 15:02:37 crc kubenswrapper[4751]: I0131 15:02:37.746488 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-create-b924d"] Jan 31 15:02:37 crc kubenswrapper[4751]: E0131 15:02:37.747341 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="67f83dc8-ae5c-44bf-8760-91952693b0cb" containerName="mariadb-account-delete" Jan 31 15:02:37 crc kubenswrapper[4751]: I0131 15:02:37.747360 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="67f83dc8-ae5c-44bf-8760-91952693b0cb" containerName="mariadb-account-delete" Jan 31 15:02:37 crc kubenswrapper[4751]: I0131 15:02:37.748994 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="67f83dc8-ae5c-44bf-8760-91952693b0cb" containerName="mariadb-account-delete" Jan 31 15:02:37 crc kubenswrapper[4751]: I0131 15:02:37.751446 4751 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-b924d" Jan 31 15:02:37 crc kubenswrapper[4751]: I0131 15:02:37.758822 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-5797-account-create-update-hfdl2"] Jan 31 15:02:37 crc kubenswrapper[4751]: I0131 15:02:37.759903 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-5797-account-create-update-hfdl2" Jan 31 15:02:37 crc kubenswrapper[4751]: I0131 15:02:37.765724 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-5797-account-create-update-hfdl2"] Jan 31 15:02:37 crc kubenswrapper[4751]: I0131 15:02:37.770604 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-db-secret" Jan 31 15:02:37 crc kubenswrapper[4751]: I0131 15:02:37.773200 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-b924d"] Jan 31 15:02:37 crc kubenswrapper[4751]: I0131 15:02:37.823561 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frh95\" (UniqueName: \"kubernetes.io/projected/58c33299-57ac-4fc9-9751-b521d31e60cf-kube-api-access-frh95\") pod \"glance-db-create-b924d\" (UID: \"58c33299-57ac-4fc9-9751-b521d31e60cf\") " pod="glance-kuttl-tests/glance-db-create-b924d" Jan 31 15:02:37 crc kubenswrapper[4751]: I0131 15:02:37.823701 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58c33299-57ac-4fc9-9751-b521d31e60cf-operator-scripts\") pod \"glance-db-create-b924d\" (UID: \"58c33299-57ac-4fc9-9751-b521d31e60cf\") " pod="glance-kuttl-tests/glance-db-create-b924d" Jan 31 15:02:37 crc kubenswrapper[4751]: I0131 15:02:37.924415 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58c33299-57ac-4fc9-9751-b521d31e60cf-operator-scripts\") pod \"glance-db-create-b924d\" (UID: \"58c33299-57ac-4fc9-9751-b521d31e60cf\") " pod="glance-kuttl-tests/glance-db-create-b924d" Jan 31 15:02:37 crc kubenswrapper[4751]: I0131 15:02:37.924479 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/896f2e37-3440-46e7-81ed-2805ab336470-operator-scripts\") pod \"glance-5797-account-create-update-hfdl2\" (UID: \"896f2e37-3440-46e7-81ed-2805ab336470\") " pod="glance-kuttl-tests/glance-5797-account-create-update-hfdl2" Jan 31 15:02:37 crc kubenswrapper[4751]: I0131 15:02:37.924504 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frh95\" (UniqueName: \"kubernetes.io/projected/58c33299-57ac-4fc9-9751-b521d31e60cf-kube-api-access-frh95\") pod \"glance-db-create-b924d\" (UID: \"58c33299-57ac-4fc9-9751-b521d31e60cf\") " pod="glance-kuttl-tests/glance-db-create-b924d" Jan 31 15:02:37 crc kubenswrapper[4751]: I0131 15:02:37.924522 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8hsvw\" (UniqueName: \"kubernetes.io/projected/896f2e37-3440-46e7-81ed-2805ab336470-kube-api-access-8hsvw\") pod \"glance-5797-account-create-update-hfdl2\" (UID: \"896f2e37-3440-46e7-81ed-2805ab336470\") " pod="glance-kuttl-tests/glance-5797-account-create-update-hfdl2" Jan 31 15:02:37 crc kubenswrapper[4751]: I0131 15:02:37.925166 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58c33299-57ac-4fc9-9751-b521d31e60cf-operator-scripts\") pod \"glance-db-create-b924d\" (UID: \"58c33299-57ac-4fc9-9751-b521d31e60cf\") " pod="glance-kuttl-tests/glance-db-create-b924d" Jan 31 15:02:37 crc kubenswrapper[4751]: I0131 15:02:37.952089 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frh95\" (UniqueName: \"kubernetes.io/projected/58c33299-57ac-4fc9-9751-b521d31e60cf-kube-api-access-frh95\") pod \"glance-db-create-b924d\" (UID: \"58c33299-57ac-4fc9-9751-b521d31e60cf\") " pod="glance-kuttl-tests/glance-db-create-b924d" Jan 31 15:02:38 crc kubenswrapper[4751]: I0131 15:02:38.025662 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8hsvw\" (UniqueName: \"kubernetes.io/projected/896f2e37-3440-46e7-81ed-2805ab336470-kube-api-access-8hsvw\") pod \"glance-5797-account-create-update-hfdl2\" (UID: \"896f2e37-3440-46e7-81ed-2805ab336470\") " pod="glance-kuttl-tests/glance-5797-account-create-update-hfdl2" Jan 31 15:02:38 crc kubenswrapper[4751]: I0131 15:02:38.025833 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/896f2e37-3440-46e7-81ed-2805ab336470-operator-scripts\") pod \"glance-5797-account-create-update-hfdl2\" (UID: \"896f2e37-3440-46e7-81ed-2805ab336470\") " pod="glance-kuttl-tests/glance-5797-account-create-update-hfdl2" Jan 31 15:02:38 crc kubenswrapper[4751]: I0131 15:02:38.026657 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/896f2e37-3440-46e7-81ed-2805ab336470-operator-scripts\") pod \"glance-5797-account-create-update-hfdl2\" (UID: \"896f2e37-3440-46e7-81ed-2805ab336470\") " pod="glance-kuttl-tests/glance-5797-account-create-update-hfdl2" Jan 31 15:02:38 crc kubenswrapper[4751]: I0131 15:02:38.042912 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8hsvw\" (UniqueName: \"kubernetes.io/projected/896f2e37-3440-46e7-81ed-2805ab336470-kube-api-access-8hsvw\") pod \"glance-5797-account-create-update-hfdl2\" (UID: \"896f2e37-3440-46e7-81ed-2805ab336470\") " 
pod="glance-kuttl-tests/glance-5797-account-create-update-hfdl2" Jan 31 15:02:38 crc kubenswrapper[4751]: I0131 15:02:38.077006 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-b924d" Jan 31 15:02:38 crc kubenswrapper[4751]: I0131 15:02:38.085191 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-5797-account-create-update-hfdl2" Jan 31 15:02:38 crc kubenswrapper[4751]: I0131 15:02:38.414447 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="67f83dc8-ae5c-44bf-8760-91952693b0cb" path="/var/lib/kubelet/pods/67f83dc8-ae5c-44bf-8760-91952693b0cb/volumes" Jan 31 15:02:38 crc kubenswrapper[4751]: I0131 15:02:38.415383 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9e826f0-62a4-4a7c-8945-0c29cd34e667" path="/var/lib/kubelet/pods/d9e826f0-62a4-4a7c-8945-0c29cd34e667/volumes" Jan 31 15:02:38 crc kubenswrapper[4751]: I0131 15:02:38.415839 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e9730563-64d8-44a2-9d93-7fe5fcd4c8d4" path="/var/lib/kubelet/pods/e9730563-64d8-44a2-9d93-7fe5fcd4c8d4/volumes" Jan 31 15:02:38 crc kubenswrapper[4751]: I0131 15:02:38.564755 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-5797-account-create-update-hfdl2"] Jan 31 15:02:38 crc kubenswrapper[4751]: I0131 15:02:38.612792 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-b924d"] Jan 31 15:02:38 crc kubenswrapper[4751]: W0131 15:02:38.620434 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod58c33299_57ac_4fc9_9751_b521d31e60cf.slice/crio-c586edc1f2bb3b69231b90966abffedc546d02a97477bc07a17109fd14a87d7e WatchSource:0}: Error finding container c586edc1f2bb3b69231b90966abffedc546d02a97477bc07a17109fd14a87d7e: Status 404 returned error 
can't find the container with id c586edc1f2bb3b69231b90966abffedc546d02a97477bc07a17109fd14a87d7e Jan 31 15:02:38 crc kubenswrapper[4751]: I0131 15:02:38.746514 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-5797-account-create-update-hfdl2" event={"ID":"896f2e37-3440-46e7-81ed-2805ab336470","Type":"ContainerStarted","Data":"79a10f8ac34beb7889999938f10a1f8fd98e243cac870f30f1dc184e88a0e786"} Jan 31 15:02:38 crc kubenswrapper[4751]: I0131 15:02:38.747010 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-5797-account-create-update-hfdl2" event={"ID":"896f2e37-3440-46e7-81ed-2805ab336470","Type":"ContainerStarted","Data":"7ec3d74c87884c401f6544a44bd9fa3d26f9394122c1ea5e7d900c5c386bfef1"} Jan 31 15:02:38 crc kubenswrapper[4751]: I0131 15:02:38.748524 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-b924d" event={"ID":"58c33299-57ac-4fc9-9751-b521d31e60cf","Type":"ContainerStarted","Data":"f05a5057693bfdfb7d9c10870add4a18c1b97e05d99f428268b3b93785058feb"} Jan 31 15:02:38 crc kubenswrapper[4751]: I0131 15:02:38.748563 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-b924d" event={"ID":"58c33299-57ac-4fc9-9751-b521d31e60cf","Type":"ContainerStarted","Data":"c586edc1f2bb3b69231b90966abffedc546d02a97477bc07a17109fd14a87d7e"} Jan 31 15:02:38 crc kubenswrapper[4751]: I0131 15:02:38.766311 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-5797-account-create-update-hfdl2" podStartSLOduration=1.7662908800000001 podStartE2EDuration="1.76629088s" podCreationTimestamp="2026-01-31 15:02:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:02:38.759594553 +0000 UTC m=+1263.134307438" watchObservedRunningTime="2026-01-31 15:02:38.76629088 +0000 UTC m=+1263.141003755" 
Jan 31 15:02:38 crc kubenswrapper[4751]: I0131 15:02:38.779370 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-db-create-b924d" podStartSLOduration=1.7793534439999998 podStartE2EDuration="1.779353444s" podCreationTimestamp="2026-01-31 15:02:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:02:38.774896317 +0000 UTC m=+1263.149609222" watchObservedRunningTime="2026-01-31 15:02:38.779353444 +0000 UTC m=+1263.154066329" Jan 31 15:02:38 crc kubenswrapper[4751]: I0131 15:02:38.896431 4751 patch_prober.go:28] interesting pod/machine-config-daemon-2wpj7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:02:38 crc kubenswrapper[4751]: I0131 15:02:38.896504 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:02:38 crc kubenswrapper[4751]: I0131 15:02:38.896560 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" Jan 31 15:02:38 crc kubenswrapper[4751]: I0131 15:02:38.897286 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"7fcf941f127d31d0e5c99d5ef038c633782d289d0e911f4e9c5c6f77b2a91e2a"} pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 15:02:38 crc 
kubenswrapper[4751]: I0131 15:02:38.897361 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerName="machine-config-daemon" containerID="cri-o://7fcf941f127d31d0e5c99d5ef038c633782d289d0e911f4e9c5c6f77b2a91e2a" gracePeriod=600 Jan 31 15:02:39 crc kubenswrapper[4751]: I0131 15:02:39.760635 4751 generic.go:334] "Generic (PLEG): container finished" podID="58c33299-57ac-4fc9-9751-b521d31e60cf" containerID="f05a5057693bfdfb7d9c10870add4a18c1b97e05d99f428268b3b93785058feb" exitCode=0 Jan 31 15:02:39 crc kubenswrapper[4751]: I0131 15:02:39.760724 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-b924d" event={"ID":"58c33299-57ac-4fc9-9751-b521d31e60cf","Type":"ContainerDied","Data":"f05a5057693bfdfb7d9c10870add4a18c1b97e05d99f428268b3b93785058feb"} Jan 31 15:02:39 crc kubenswrapper[4751]: I0131 15:02:39.763337 4751 generic.go:334] "Generic (PLEG): container finished" podID="896f2e37-3440-46e7-81ed-2805ab336470" containerID="79a10f8ac34beb7889999938f10a1f8fd98e243cac870f30f1dc184e88a0e786" exitCode=0 Jan 31 15:02:39 crc kubenswrapper[4751]: I0131 15:02:39.763384 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-5797-account-create-update-hfdl2" event={"ID":"896f2e37-3440-46e7-81ed-2805ab336470","Type":"ContainerDied","Data":"79a10f8ac34beb7889999938f10a1f8fd98e243cac870f30f1dc184e88a0e786"} Jan 31 15:02:39 crc kubenswrapper[4751]: I0131 15:02:39.767211 4751 generic.go:334] "Generic (PLEG): container finished" podID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerID="7fcf941f127d31d0e5c99d5ef038c633782d289d0e911f4e9c5c6f77b2a91e2a" exitCode=0 Jan 31 15:02:39 crc kubenswrapper[4751]: I0131 15:02:39.767280 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" 
event={"ID":"b4c170e8-22c9-43a9-8b34-9d626c2ccddc","Type":"ContainerDied","Data":"7fcf941f127d31d0e5c99d5ef038c633782d289d0e911f4e9c5c6f77b2a91e2a"} Jan 31 15:02:39 crc kubenswrapper[4751]: I0131 15:02:39.767321 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" event={"ID":"b4c170e8-22c9-43a9-8b34-9d626c2ccddc","Type":"ContainerStarted","Data":"89a88ddaeae8a6fe7859be79e45bc66e157a0d02a03f5daf69e0ab6320bd15be"} Jan 31 15:02:39 crc kubenswrapper[4751]: I0131 15:02:39.767349 4751 scope.go:117] "RemoveContainer" containerID="dc064826cd8a78005216541d25736856cc2dd920bfe44778b79dbfd2f76ed341" Jan 31 15:02:41 crc kubenswrapper[4751]: I0131 15:02:41.095582 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-5797-account-create-update-hfdl2" Jan 31 15:02:41 crc kubenswrapper[4751]: I0131 15:02:41.102281 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-b924d" Jan 31 15:02:41 crc kubenswrapper[4751]: I0131 15:02:41.171260 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/896f2e37-3440-46e7-81ed-2805ab336470-operator-scripts\") pod \"896f2e37-3440-46e7-81ed-2805ab336470\" (UID: \"896f2e37-3440-46e7-81ed-2805ab336470\") " Jan 31 15:02:41 crc kubenswrapper[4751]: I0131 15:02:41.171416 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8hsvw\" (UniqueName: \"kubernetes.io/projected/896f2e37-3440-46e7-81ed-2805ab336470-kube-api-access-8hsvw\") pod \"896f2e37-3440-46e7-81ed-2805ab336470\" (UID: \"896f2e37-3440-46e7-81ed-2805ab336470\") " Jan 31 15:02:41 crc kubenswrapper[4751]: I0131 15:02:41.172172 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/896f2e37-3440-46e7-81ed-2805ab336470-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "896f2e37-3440-46e7-81ed-2805ab336470" (UID: "896f2e37-3440-46e7-81ed-2805ab336470"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:02:41 crc kubenswrapper[4751]: I0131 15:02:41.177778 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/896f2e37-3440-46e7-81ed-2805ab336470-kube-api-access-8hsvw" (OuterVolumeSpecName: "kube-api-access-8hsvw") pod "896f2e37-3440-46e7-81ed-2805ab336470" (UID: "896f2e37-3440-46e7-81ed-2805ab336470"). InnerVolumeSpecName "kube-api-access-8hsvw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:02:41 crc kubenswrapper[4751]: I0131 15:02:41.272655 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58c33299-57ac-4fc9-9751-b521d31e60cf-operator-scripts\") pod \"58c33299-57ac-4fc9-9751-b521d31e60cf\" (UID: \"58c33299-57ac-4fc9-9751-b521d31e60cf\") " Jan 31 15:02:41 crc kubenswrapper[4751]: I0131 15:02:41.272744 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-frh95\" (UniqueName: \"kubernetes.io/projected/58c33299-57ac-4fc9-9751-b521d31e60cf-kube-api-access-frh95\") pod \"58c33299-57ac-4fc9-9751-b521d31e60cf\" (UID: \"58c33299-57ac-4fc9-9751-b521d31e60cf\") " Jan 31 15:02:41 crc kubenswrapper[4751]: I0131 15:02:41.273024 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/896f2e37-3440-46e7-81ed-2805ab336470-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:41 crc kubenswrapper[4751]: I0131 15:02:41.273037 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8hsvw\" (UniqueName: 
\"kubernetes.io/projected/896f2e37-3440-46e7-81ed-2805ab336470-kube-api-access-8hsvw\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:41 crc kubenswrapper[4751]: I0131 15:02:41.273473 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/58c33299-57ac-4fc9-9751-b521d31e60cf-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "58c33299-57ac-4fc9-9751-b521d31e60cf" (UID: "58c33299-57ac-4fc9-9751-b521d31e60cf"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:02:41 crc kubenswrapper[4751]: I0131 15:02:41.279582 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/58c33299-57ac-4fc9-9751-b521d31e60cf-kube-api-access-frh95" (OuterVolumeSpecName: "kube-api-access-frh95") pod "58c33299-57ac-4fc9-9751-b521d31e60cf" (UID: "58c33299-57ac-4fc9-9751-b521d31e60cf"). InnerVolumeSpecName "kube-api-access-frh95". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:02:41 crc kubenswrapper[4751]: I0131 15:02:41.374410 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/58c33299-57ac-4fc9-9751-b521d31e60cf-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:41 crc kubenswrapper[4751]: I0131 15:02:41.374461 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-frh95\" (UniqueName: \"kubernetes.io/projected/58c33299-57ac-4fc9-9751-b521d31e60cf-kube-api-access-frh95\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:41 crc kubenswrapper[4751]: I0131 15:02:41.784405 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-create-b924d" Jan 31 15:02:41 crc kubenswrapper[4751]: I0131 15:02:41.784589 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-b924d" event={"ID":"58c33299-57ac-4fc9-9751-b521d31e60cf","Type":"ContainerDied","Data":"c586edc1f2bb3b69231b90966abffedc546d02a97477bc07a17109fd14a87d7e"} Jan 31 15:02:41 crc kubenswrapper[4751]: I0131 15:02:41.784675 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c586edc1f2bb3b69231b90966abffedc546d02a97477bc07a17109fd14a87d7e" Jan 31 15:02:41 crc kubenswrapper[4751]: I0131 15:02:41.785985 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-5797-account-create-update-hfdl2" event={"ID":"896f2e37-3440-46e7-81ed-2805ab336470","Type":"ContainerDied","Data":"7ec3d74c87884c401f6544a44bd9fa3d26f9394122c1ea5e7d900c5c386bfef1"} Jan 31 15:02:41 crc kubenswrapper[4751]: I0131 15:02:41.786038 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7ec3d74c87884c401f6544a44bd9fa3d26f9394122c1ea5e7d900c5c386bfef1" Jan 31 15:02:41 crc kubenswrapper[4751]: I0131 15:02:41.786006 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-5797-account-create-update-hfdl2" Jan 31 15:02:42 crc kubenswrapper[4751]: I0131 15:02:42.854560 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-sync-pn752"] Jan 31 15:02:42 crc kubenswrapper[4751]: E0131 15:02:42.855101 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="896f2e37-3440-46e7-81ed-2805ab336470" containerName="mariadb-account-create-update" Jan 31 15:02:42 crc kubenswrapper[4751]: I0131 15:02:42.855116 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="896f2e37-3440-46e7-81ed-2805ab336470" containerName="mariadb-account-create-update" Jan 31 15:02:42 crc kubenswrapper[4751]: E0131 15:02:42.855148 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="58c33299-57ac-4fc9-9751-b521d31e60cf" containerName="mariadb-database-create" Jan 31 15:02:42 crc kubenswrapper[4751]: I0131 15:02:42.855156 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="58c33299-57ac-4fc9-9751-b521d31e60cf" containerName="mariadb-database-create" Jan 31 15:02:42 crc kubenswrapper[4751]: I0131 15:02:42.855304 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="58c33299-57ac-4fc9-9751-b521d31e60cf" containerName="mariadb-database-create" Jan 31 15:02:42 crc kubenswrapper[4751]: I0131 15:02:42.855326 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="896f2e37-3440-46e7-81ed-2805ab336470" containerName="mariadb-account-create-update" Jan 31 15:02:42 crc kubenswrapper[4751]: I0131 15:02:42.855721 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-pn752" Jan 31 15:02:42 crc kubenswrapper[4751]: I0131 15:02:42.858896 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-config-data" Jan 31 15:02:42 crc kubenswrapper[4751]: I0131 15:02:42.859201 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-hltqc" Jan 31 15:02:42 crc kubenswrapper[4751]: I0131 15:02:42.867181 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-pn752"] Jan 31 15:02:42 crc kubenswrapper[4751]: I0131 15:02:42.997970 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8g2hb\" (UniqueName: \"kubernetes.io/projected/d350f693-ea74-48d5-a7a7-3fa3264174ca-kube-api-access-8g2hb\") pod \"glance-db-sync-pn752\" (UID: \"d350f693-ea74-48d5-a7a7-3fa3264174ca\") " pod="glance-kuttl-tests/glance-db-sync-pn752" Jan 31 15:02:42 crc kubenswrapper[4751]: I0131 15:02:42.998061 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d350f693-ea74-48d5-a7a7-3fa3264174ca-db-sync-config-data\") pod \"glance-db-sync-pn752\" (UID: \"d350f693-ea74-48d5-a7a7-3fa3264174ca\") " pod="glance-kuttl-tests/glance-db-sync-pn752" Jan 31 15:02:42 crc kubenswrapper[4751]: I0131 15:02:42.998124 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d350f693-ea74-48d5-a7a7-3fa3264174ca-config-data\") pod \"glance-db-sync-pn752\" (UID: \"d350f693-ea74-48d5-a7a7-3fa3264174ca\") " pod="glance-kuttl-tests/glance-db-sync-pn752" Jan 31 15:02:43 crc kubenswrapper[4751]: I0131 15:02:43.099521 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/d350f693-ea74-48d5-a7a7-3fa3264174ca-config-data\") pod \"glance-db-sync-pn752\" (UID: \"d350f693-ea74-48d5-a7a7-3fa3264174ca\") " pod="glance-kuttl-tests/glance-db-sync-pn752" Jan 31 15:02:43 crc kubenswrapper[4751]: I0131 15:02:43.099635 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8g2hb\" (UniqueName: \"kubernetes.io/projected/d350f693-ea74-48d5-a7a7-3fa3264174ca-kube-api-access-8g2hb\") pod \"glance-db-sync-pn752\" (UID: \"d350f693-ea74-48d5-a7a7-3fa3264174ca\") " pod="glance-kuttl-tests/glance-db-sync-pn752" Jan 31 15:02:43 crc kubenswrapper[4751]: I0131 15:02:43.099713 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d350f693-ea74-48d5-a7a7-3fa3264174ca-db-sync-config-data\") pod \"glance-db-sync-pn752\" (UID: \"d350f693-ea74-48d5-a7a7-3fa3264174ca\") " pod="glance-kuttl-tests/glance-db-sync-pn752" Jan 31 15:02:43 crc kubenswrapper[4751]: I0131 15:02:43.111982 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d350f693-ea74-48d5-a7a7-3fa3264174ca-db-sync-config-data\") pod \"glance-db-sync-pn752\" (UID: \"d350f693-ea74-48d5-a7a7-3fa3264174ca\") " pod="glance-kuttl-tests/glance-db-sync-pn752" Jan 31 15:02:43 crc kubenswrapper[4751]: I0131 15:02:43.112013 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d350f693-ea74-48d5-a7a7-3fa3264174ca-config-data\") pod \"glance-db-sync-pn752\" (UID: \"d350f693-ea74-48d5-a7a7-3fa3264174ca\") " pod="glance-kuttl-tests/glance-db-sync-pn752" Jan 31 15:02:43 crc kubenswrapper[4751]: I0131 15:02:43.117723 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8g2hb\" (UniqueName: \"kubernetes.io/projected/d350f693-ea74-48d5-a7a7-3fa3264174ca-kube-api-access-8g2hb\") pod 
\"glance-db-sync-pn752\" (UID: \"d350f693-ea74-48d5-a7a7-3fa3264174ca\") " pod="glance-kuttl-tests/glance-db-sync-pn752" Jan 31 15:02:43 crc kubenswrapper[4751]: I0131 15:02:43.171591 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-pn752" Jan 31 15:02:43 crc kubenswrapper[4751]: I0131 15:02:43.582003 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-pn752"] Jan 31 15:02:43 crc kubenswrapper[4751]: W0131 15:02:43.585713 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd350f693_ea74_48d5_a7a7_3fa3264174ca.slice/crio-6009e8294d091b1eb1aa19dd6fb100b42cfbbfaf924c7ebad1269a56b8969294 WatchSource:0}: Error finding container 6009e8294d091b1eb1aa19dd6fb100b42cfbbfaf924c7ebad1269a56b8969294: Status 404 returned error can't find the container with id 6009e8294d091b1eb1aa19dd6fb100b42cfbbfaf924c7ebad1269a56b8969294 Jan 31 15:02:43 crc kubenswrapper[4751]: I0131 15:02:43.801523 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-pn752" event={"ID":"d350f693-ea74-48d5-a7a7-3fa3264174ca","Type":"ContainerStarted","Data":"6009e8294d091b1eb1aa19dd6fb100b42cfbbfaf924c7ebad1269a56b8969294"} Jan 31 15:02:44 crc kubenswrapper[4751]: I0131 15:02:44.810024 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-pn752" event={"ID":"d350f693-ea74-48d5-a7a7-3fa3264174ca","Type":"ContainerStarted","Data":"0e1d80ca3a8421336cb1b11f5bd0a2d183f47c5e60dedbf720f6c08836e3d291"} Jan 31 15:02:44 crc kubenswrapper[4751]: I0131 15:02:44.841810 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-db-sync-pn752" podStartSLOduration=2.841791062 podStartE2EDuration="2.841791062s" podCreationTimestamp="2026-01-31 15:02:42 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:02:44.839773838 +0000 UTC m=+1269.214486723" watchObservedRunningTime="2026-01-31 15:02:44.841791062 +0000 UTC m=+1269.216503967" Jan 31 15:02:46 crc kubenswrapper[4751]: I0131 15:02:46.825331 4751 generic.go:334] "Generic (PLEG): container finished" podID="d350f693-ea74-48d5-a7a7-3fa3264174ca" containerID="0e1d80ca3a8421336cb1b11f5bd0a2d183f47c5e60dedbf720f6c08836e3d291" exitCode=0 Jan 31 15:02:46 crc kubenswrapper[4751]: I0131 15:02:46.825437 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-pn752" event={"ID":"d350f693-ea74-48d5-a7a7-3fa3264174ca","Type":"ContainerDied","Data":"0e1d80ca3a8421336cb1b11f5bd0a2d183f47c5e60dedbf720f6c08836e3d291"} Jan 31 15:02:48 crc kubenswrapper[4751]: I0131 15:02:48.158274 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-pn752" Jan 31 15:02:48 crc kubenswrapper[4751]: I0131 15:02:48.271008 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8g2hb\" (UniqueName: \"kubernetes.io/projected/d350f693-ea74-48d5-a7a7-3fa3264174ca-kube-api-access-8g2hb\") pod \"d350f693-ea74-48d5-a7a7-3fa3264174ca\" (UID: \"d350f693-ea74-48d5-a7a7-3fa3264174ca\") " Jan 31 15:02:48 crc kubenswrapper[4751]: I0131 15:02:48.271255 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d350f693-ea74-48d5-a7a7-3fa3264174ca-db-sync-config-data\") pod \"d350f693-ea74-48d5-a7a7-3fa3264174ca\" (UID: \"d350f693-ea74-48d5-a7a7-3fa3264174ca\") " Jan 31 15:02:48 crc kubenswrapper[4751]: I0131 15:02:48.271282 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d350f693-ea74-48d5-a7a7-3fa3264174ca-config-data\") pod 
\"d350f693-ea74-48d5-a7a7-3fa3264174ca\" (UID: \"d350f693-ea74-48d5-a7a7-3fa3264174ca\") " Jan 31 15:02:48 crc kubenswrapper[4751]: I0131 15:02:48.276892 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d350f693-ea74-48d5-a7a7-3fa3264174ca-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "d350f693-ea74-48d5-a7a7-3fa3264174ca" (UID: "d350f693-ea74-48d5-a7a7-3fa3264174ca"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:02:48 crc kubenswrapper[4751]: I0131 15:02:48.278878 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d350f693-ea74-48d5-a7a7-3fa3264174ca-kube-api-access-8g2hb" (OuterVolumeSpecName: "kube-api-access-8g2hb") pod "d350f693-ea74-48d5-a7a7-3fa3264174ca" (UID: "d350f693-ea74-48d5-a7a7-3fa3264174ca"). InnerVolumeSpecName "kube-api-access-8g2hb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:02:48 crc kubenswrapper[4751]: I0131 15:02:48.309213 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d350f693-ea74-48d5-a7a7-3fa3264174ca-config-data" (OuterVolumeSpecName: "config-data") pod "d350f693-ea74-48d5-a7a7-3fa3264174ca" (UID: "d350f693-ea74-48d5-a7a7-3fa3264174ca"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:02:48 crc kubenswrapper[4751]: I0131 15:02:48.373479 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d350f693-ea74-48d5-a7a7-3fa3264174ca-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:48 crc kubenswrapper[4751]: I0131 15:02:48.373515 4751 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/d350f693-ea74-48d5-a7a7-3fa3264174ca-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:48 crc kubenswrapper[4751]: I0131 15:02:48.373527 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8g2hb\" (UniqueName: \"kubernetes.io/projected/d350f693-ea74-48d5-a7a7-3fa3264174ca-kube-api-access-8g2hb\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:48 crc kubenswrapper[4751]: I0131 15:02:48.839624 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-pn752" event={"ID":"d350f693-ea74-48d5-a7a7-3fa3264174ca","Type":"ContainerDied","Data":"6009e8294d091b1eb1aa19dd6fb100b42cfbbfaf924c7ebad1269a56b8969294"} Jan 31 15:02:48 crc kubenswrapper[4751]: I0131 15:02:48.839662 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6009e8294d091b1eb1aa19dd6fb100b42cfbbfaf924c7ebad1269a56b8969294" Jan 31 15:02:48 crc kubenswrapper[4751]: I0131 15:02:48.839695 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-pn752" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.144856 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 15:02:50 crc kubenswrapper[4751]: E0131 15:02:50.145544 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d350f693-ea74-48d5-a7a7-3fa3264174ca" containerName="glance-db-sync" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.145562 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d350f693-ea74-48d5-a7a7-3fa3264174ca" containerName="glance-db-sync" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.145705 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="d350f693-ea74-48d5-a7a7-3fa3264174ca" containerName="glance-db-sync" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.146408 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.148018 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-external-config-data" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.148143 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-scripts" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.154114 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-hltqc" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.160009 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.297125 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/a5d5c53d-eea5-4866-983f-8477eb16177b-config-data\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.297183 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5d5c53d-eea5-4866-983f-8477eb16177b-logs\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.297385 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-dev\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.297456 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.297528 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.297555 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" 
(UniqueName: \"kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-sys\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.297580 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a5d5c53d-eea5-4866-983f-8477eb16177b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.297689 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.297727 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5d5c53d-eea5-4866-983f-8477eb16177b-scripts\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.297787 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.297812 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-run\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.297878 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-llcmb\" (UniqueName: \"kubernetes.io/projected/a5d5c53d-eea5-4866-983f-8477eb16177b-kube-api-access-llcmb\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.297905 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.297934 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.398974 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 
15:02:50.399021 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5d5c53d-eea5-4866-983f-8477eb16177b-scripts\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.399050 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.399080 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-run\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.399108 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-llcmb\" (UniqueName: \"kubernetes.io/projected/a5d5c53d-eea5-4866-983f-8477eb16177b-kube-api-access-llcmb\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.399126 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.399151 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.399182 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5d5c53d-eea5-4866-983f-8477eb16177b-config-data\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.399203 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5d5c53d-eea5-4866-983f-8477eb16177b-logs\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.399237 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-dev\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.399243 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-run\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.399257 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.399361 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.399383 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-sys\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.399408 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a5d5c53d-eea5-4866-983f-8477eb16177b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.399433 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.399435 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-lib-modules\") pod 
\"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.399472 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-dev\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.399353 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.399581 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-sys\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.399638 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.399789 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") device mount path 
\"/mnt/openstack/pv02\"" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.399972 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a5d5c53d-eea5-4866-983f-8477eb16177b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.400004 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5d5c53d-eea5-4866-983f-8477eb16177b-logs\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.400101 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") device mount path \"/mnt/openstack/pv05\"" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.408924 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5d5c53d-eea5-4866-983f-8477eb16177b-config-data\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.409379 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5d5c53d-eea5-4866-983f-8477eb16177b-scripts\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " 
pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.419705 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-llcmb\" (UniqueName: \"kubernetes.io/projected/a5d5c53d-eea5-4866-983f-8477eb16177b-kube-api-access-llcmb\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.424151 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.431509 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.463053 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.587872 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.589010 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.591527 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-internal-config-data" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.602637 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.703769 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.703844 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.703869 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/00ce2535-6386-444d-8bbd-abded7935ebf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.703919 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-run\") pod \"glance-default-internal-api-0\" (UID: 
\"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.703984 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.704001 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00ce2535-6386-444d-8bbd-abded7935ebf-logs\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.704037 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vwdrl\" (UniqueName: \"kubernetes.io/projected/00ce2535-6386-444d-8bbd-abded7935ebf-kube-api-access-vwdrl\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.704062 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00ce2535-6386-444d-8bbd-abded7935ebf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.704097 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00ce2535-6386-444d-8bbd-abded7935ebf-scripts\") pod 
\"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.704125 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.704139 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-dev\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.704164 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.704201 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.704227 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-sys\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.805317 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.805375 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/00ce2535-6386-444d-8bbd-abded7935ebf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.805410 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-run\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.805447 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.805465 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00ce2535-6386-444d-8bbd-abded7935ebf-logs\") pod 
\"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.805488 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.805514 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vwdrl\" (UniqueName: \"kubernetes.io/projected/00ce2535-6386-444d-8bbd-abded7935ebf-kube-api-access-vwdrl\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.805502 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-run\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.805591 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00ce2535-6386-444d-8bbd-abded7935ebf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.805641 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00ce2535-6386-444d-8bbd-abded7935ebf-scripts\") pod \"glance-default-internal-api-0\" (UID: 
\"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.805722 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.805751 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-dev\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.805818 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.805837 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-dev\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.805854 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") device mount path \"/mnt/openstack/pv13\"" 
pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.805843 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.805850 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.805879 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.805962 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-sys\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.806002 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 
31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.806013 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-sys\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.805973 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") device mount path \"/mnt/openstack/pv17\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.806032 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/00ce2535-6386-444d-8bbd-abded7935ebf-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.806114 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00ce2535-6386-444d-8bbd-abded7935ebf-logs\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.806154 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.812777 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00ce2535-6386-444d-8bbd-abded7935ebf-scripts\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.813246 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00ce2535-6386-444d-8bbd-abded7935ebf-config-data\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.822062 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vwdrl\" (UniqueName: \"kubernetes.io/projected/00ce2535-6386-444d-8bbd-abded7935ebf-kube-api-access-vwdrl\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.829347 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.832592 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-internal-api-0\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.907054 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:50 crc kubenswrapper[4751]: I0131 15:02:50.909045 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 15:02:51 crc kubenswrapper[4751]: I0131 15:02:51.117274 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 15:02:51 crc kubenswrapper[4751]: I0131 15:02:51.339859 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 15:02:51 crc kubenswrapper[4751]: I0131 15:02:51.879806 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"a5d5c53d-eea5-4866-983f-8477eb16177b","Type":"ContainerStarted","Data":"46e6d1ac28c6a7caa669a8786fb2699fde254ad44262a93b0f5099c64c3baee9"} Jan 31 15:02:51 crc kubenswrapper[4751]: I0131 15:02:51.880330 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"a5d5c53d-eea5-4866-983f-8477eb16177b","Type":"ContainerStarted","Data":"1130e67347fe51af0e3a73480eea09807ed13c49b680ae612acbf9d7812bf42d"} Jan 31 15:02:51 crc kubenswrapper[4751]: I0131 15:02:51.880347 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"a5d5c53d-eea5-4866-983f-8477eb16177b","Type":"ContainerStarted","Data":"f196d65739ae0a450d1f988eb2e240599d07c3bc3006a4085be84398709e7ee3"} Jan 31 15:02:51 crc kubenswrapper[4751]: I0131 15:02:51.882978 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"00ce2535-6386-444d-8bbd-abded7935ebf","Type":"ContainerStarted","Data":"ae2fb2bf76086e3824ffca58eb1d7c7c332b7d443fb93e1343b97051d794f81b"} Jan 31 15:02:51 crc kubenswrapper[4751]: I0131 15:02:51.883003 4751 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"00ce2535-6386-444d-8bbd-abded7935ebf","Type":"ContainerStarted","Data":"6f4bddbaf0148a369cf2bebe12727f16d7417ca9251868a8567b9cbe7ad7cc1d"} Jan 31 15:02:51 crc kubenswrapper[4751]: I0131 15:02:51.910543 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-external-api-0" podStartSLOduration=1.910519046 podStartE2EDuration="1.910519046s" podCreationTimestamp="2026-01-31 15:02:50 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:02:51.901704943 +0000 UTC m=+1276.276417828" watchObservedRunningTime="2026-01-31 15:02:51.910519046 +0000 UTC m=+1276.285231951" Jan 31 15:02:52 crc kubenswrapper[4751]: I0131 15:02:52.891830 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"00ce2535-6386-444d-8bbd-abded7935ebf","Type":"ContainerStarted","Data":"8320b4bf6cf3688e90f70ba1e9d0543cf0c4236c9baf7a9f5afd1fdd87cde7f1"} Jan 31 15:02:52 crc kubenswrapper[4751]: I0131 15:02:52.891882 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="00ce2535-6386-444d-8bbd-abded7935ebf" containerName="glance-log" containerID="cri-o://ae2fb2bf76086e3824ffca58eb1d7c7c332b7d443fb93e1343b97051d794f81b" gracePeriod=30 Jan 31 15:02:52 crc kubenswrapper[4751]: I0131 15:02:52.891994 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="00ce2535-6386-444d-8bbd-abded7935ebf" containerName="glance-httpd" containerID="cri-o://8320b4bf6cf3688e90f70ba1e9d0543cf0c4236c9baf7a9f5afd1fdd87cde7f1" gracePeriod=30 Jan 31 15:02:52 crc kubenswrapper[4751]: I0131 15:02:52.926356 4751 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="glance-kuttl-tests/glance-default-internal-api-0" podStartSLOduration=3.926336553 podStartE2EDuration="3.926336553s" podCreationTimestamp="2026-01-31 15:02:49 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:02:52.920374176 +0000 UTC m=+1277.295087081" watchObservedRunningTime="2026-01-31 15:02:52.926336553 +0000 UTC m=+1277.301049438" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.313510 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.351984 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-var-locks-brick\") pod \"00ce2535-6386-444d-8bbd-abded7935ebf\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.352044 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00ce2535-6386-444d-8bbd-abded7935ebf-scripts\") pod \"00ce2535-6386-444d-8bbd-abded7935ebf\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.352119 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-sys\") pod \"00ce2535-6386-444d-8bbd-abded7935ebf\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.352145 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-dev\") pod \"00ce2535-6386-444d-8bbd-abded7935ebf\" (UID: 
\"00ce2535-6386-444d-8bbd-abded7935ebf\") " Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.352176 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"00ce2535-6386-444d-8bbd-abded7935ebf\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.352193 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-etc-nvme\") pod \"00ce2535-6386-444d-8bbd-abded7935ebf\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.352223 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-lib-modules\") pod \"00ce2535-6386-444d-8bbd-abded7935ebf\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.352243 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"00ce2535-6386-444d-8bbd-abded7935ebf\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.352260 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vwdrl\" (UniqueName: \"kubernetes.io/projected/00ce2535-6386-444d-8bbd-abded7935ebf-kube-api-access-vwdrl\") pod \"00ce2535-6386-444d-8bbd-abded7935ebf\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.352295 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-etc-iscsi\") pod 
\"00ce2535-6386-444d-8bbd-abded7935ebf\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.352335 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/00ce2535-6386-444d-8bbd-abded7935ebf-httpd-run\") pod \"00ce2535-6386-444d-8bbd-abded7935ebf\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.352397 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00ce2535-6386-444d-8bbd-abded7935ebf-config-data\") pod \"00ce2535-6386-444d-8bbd-abded7935ebf\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.352424 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00ce2535-6386-444d-8bbd-abded7935ebf-logs\") pod \"00ce2535-6386-444d-8bbd-abded7935ebf\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.352438 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-run\") pod \"00ce2535-6386-444d-8bbd-abded7935ebf\" (UID: \"00ce2535-6386-444d-8bbd-abded7935ebf\") " Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.352721 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-run" (OuterVolumeSpecName: "run") pod "00ce2535-6386-444d-8bbd-abded7935ebf" (UID: "00ce2535-6386-444d-8bbd-abded7935ebf"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.352751 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "00ce2535-6386-444d-8bbd-abded7935ebf" (UID: "00ce2535-6386-444d-8bbd-abded7935ebf"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.357796 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-sys" (OuterVolumeSpecName: "sys") pod "00ce2535-6386-444d-8bbd-abded7935ebf" (UID: "00ce2535-6386-444d-8bbd-abded7935ebf"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.357849 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-dev" (OuterVolumeSpecName: "dev") pod "00ce2535-6386-444d-8bbd-abded7935ebf" (UID: "00ce2535-6386-444d-8bbd-abded7935ebf"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.359234 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "00ce2535-6386-444d-8bbd-abded7935ebf" (UID: "00ce2535-6386-444d-8bbd-abded7935ebf"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.359335 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "00ce2535-6386-444d-8bbd-abded7935ebf" (UID: "00ce2535-6386-444d-8bbd-abded7935ebf"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.359478 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "00ce2535-6386-444d-8bbd-abded7935ebf" (UID: "00ce2535-6386-444d-8bbd-abded7935ebf"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.359591 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00ce2535-6386-444d-8bbd-abded7935ebf-scripts" (OuterVolumeSpecName: "scripts") pod "00ce2535-6386-444d-8bbd-abded7935ebf" (UID: "00ce2535-6386-444d-8bbd-abded7935ebf"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.359826 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00ce2535-6386-444d-8bbd-abded7935ebf-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "00ce2535-6386-444d-8bbd-abded7935ebf" (UID: "00ce2535-6386-444d-8bbd-abded7935ebf"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.359925 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/00ce2535-6386-444d-8bbd-abded7935ebf-logs" (OuterVolumeSpecName: "logs") pod "00ce2535-6386-444d-8bbd-abded7935ebf" (UID: "00ce2535-6386-444d-8bbd-abded7935ebf"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.361405 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage17-crc" (OuterVolumeSpecName: "glance") pod "00ce2535-6386-444d-8bbd-abded7935ebf" (UID: "00ce2535-6386-444d-8bbd-abded7935ebf"). InnerVolumeSpecName "local-storage17-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.362264 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage13-crc" (OuterVolumeSpecName: "glance-cache") pod "00ce2535-6386-444d-8bbd-abded7935ebf" (UID: "00ce2535-6386-444d-8bbd-abded7935ebf"). InnerVolumeSpecName "local-storage13-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.364143 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/00ce2535-6386-444d-8bbd-abded7935ebf-kube-api-access-vwdrl" (OuterVolumeSpecName: "kube-api-access-vwdrl") pod "00ce2535-6386-444d-8bbd-abded7935ebf" (UID: "00ce2535-6386-444d-8bbd-abded7935ebf"). InnerVolumeSpecName "kube-api-access-vwdrl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.400631 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/00ce2535-6386-444d-8bbd-abded7935ebf-config-data" (OuterVolumeSpecName: "config-data") pod "00ce2535-6386-444d-8bbd-abded7935ebf" (UID: "00ce2535-6386-444d-8bbd-abded7935ebf"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.454659 4751 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.454691 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/00ce2535-6386-444d-8bbd-abded7935ebf-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.454700 4751 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-sys\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.454708 4751 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-dev\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.454726 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") on node \"crc\" " Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.454737 4751 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-etc-nvme\") on node \"crc\" DevicePath 
\"\"" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.454745 4751 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.454757 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" " Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.454767 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vwdrl\" (UniqueName: \"kubernetes.io/projected/00ce2535-6386-444d-8bbd-abded7935ebf-kube-api-access-vwdrl\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.454776 4751 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.454784 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/00ce2535-6386-444d-8bbd-abded7935ebf-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.454792 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/00ce2535-6386-444d-8bbd-abded7935ebf-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.454801 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/00ce2535-6386-444d-8bbd-abded7935ebf-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.454809 4751 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/00ce2535-6386-444d-8bbd-abded7935ebf-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.471259 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage17-crc" (UniqueName: "kubernetes.io/local-volume/local-storage17-crc") on node "crc" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.471363 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage13-crc" (UniqueName: "kubernetes.io/local-volume/local-storage13-crc") on node "crc" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.556094 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.556128 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.907030 4751 generic.go:334] "Generic (PLEG): container finished" podID="00ce2535-6386-444d-8bbd-abded7935ebf" containerID="8320b4bf6cf3688e90f70ba1e9d0543cf0c4236c9baf7a9f5afd1fdd87cde7f1" exitCode=143 Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.908317 4751 generic.go:334] "Generic (PLEG): container finished" podID="00ce2535-6386-444d-8bbd-abded7935ebf" containerID="ae2fb2bf76086e3824ffca58eb1d7c7c332b7d443fb93e1343b97051d794f81b" exitCode=143 Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.907223 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.907126 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"00ce2535-6386-444d-8bbd-abded7935ebf","Type":"ContainerDied","Data":"8320b4bf6cf3688e90f70ba1e9d0543cf0c4236c9baf7a9f5afd1fdd87cde7f1"} Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.908642 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"00ce2535-6386-444d-8bbd-abded7935ebf","Type":"ContainerDied","Data":"ae2fb2bf76086e3824ffca58eb1d7c7c332b7d443fb93e1343b97051d794f81b"} Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.908669 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"00ce2535-6386-444d-8bbd-abded7935ebf","Type":"ContainerDied","Data":"6f4bddbaf0148a369cf2bebe12727f16d7417ca9251868a8567b9cbe7ad7cc1d"} Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.908705 4751 scope.go:117] "RemoveContainer" containerID="8320b4bf6cf3688e90f70ba1e9d0543cf0c4236c9baf7a9f5afd1fdd87cde7f1" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.938582 4751 scope.go:117] "RemoveContainer" containerID="ae2fb2bf76086e3824ffca58eb1d7c7c332b7d443fb93e1343b97051d794f81b" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.965175 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.975485 4751 scope.go:117] "RemoveContainer" containerID="8320b4bf6cf3688e90f70ba1e9d0543cf0c4236c9baf7a9f5afd1fdd87cde7f1" Jan 31 15:02:53 crc kubenswrapper[4751]: E0131 15:02:53.975987 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"8320b4bf6cf3688e90f70ba1e9d0543cf0c4236c9baf7a9f5afd1fdd87cde7f1\": container with ID starting with 8320b4bf6cf3688e90f70ba1e9d0543cf0c4236c9baf7a9f5afd1fdd87cde7f1 not found: ID does not exist" containerID="8320b4bf6cf3688e90f70ba1e9d0543cf0c4236c9baf7a9f5afd1fdd87cde7f1" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.976057 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8320b4bf6cf3688e90f70ba1e9d0543cf0c4236c9baf7a9f5afd1fdd87cde7f1"} err="failed to get container status \"8320b4bf6cf3688e90f70ba1e9d0543cf0c4236c9baf7a9f5afd1fdd87cde7f1\": rpc error: code = NotFound desc = could not find container \"8320b4bf6cf3688e90f70ba1e9d0543cf0c4236c9baf7a9f5afd1fdd87cde7f1\": container with ID starting with 8320b4bf6cf3688e90f70ba1e9d0543cf0c4236c9baf7a9f5afd1fdd87cde7f1 not found: ID does not exist" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.976107 4751 scope.go:117] "RemoveContainer" containerID="ae2fb2bf76086e3824ffca58eb1d7c7c332b7d443fb93e1343b97051d794f81b" Jan 31 15:02:53 crc kubenswrapper[4751]: E0131 15:02:53.976459 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ae2fb2bf76086e3824ffca58eb1d7c7c332b7d443fb93e1343b97051d794f81b\": container with ID starting with ae2fb2bf76086e3824ffca58eb1d7c7c332b7d443fb93e1343b97051d794f81b not found: ID does not exist" containerID="ae2fb2bf76086e3824ffca58eb1d7c7c332b7d443fb93e1343b97051d794f81b" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.976498 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae2fb2bf76086e3824ffca58eb1d7c7c332b7d443fb93e1343b97051d794f81b"} err="failed to get container status \"ae2fb2bf76086e3824ffca58eb1d7c7c332b7d443fb93e1343b97051d794f81b\": rpc error: code = NotFound desc = could not find container \"ae2fb2bf76086e3824ffca58eb1d7c7c332b7d443fb93e1343b97051d794f81b\": container with ID 
starting with ae2fb2bf76086e3824ffca58eb1d7c7c332b7d443fb93e1343b97051d794f81b not found: ID does not exist" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.976524 4751 scope.go:117] "RemoveContainer" containerID="8320b4bf6cf3688e90f70ba1e9d0543cf0c4236c9baf7a9f5afd1fdd87cde7f1" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.976795 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8320b4bf6cf3688e90f70ba1e9d0543cf0c4236c9baf7a9f5afd1fdd87cde7f1"} err="failed to get container status \"8320b4bf6cf3688e90f70ba1e9d0543cf0c4236c9baf7a9f5afd1fdd87cde7f1\": rpc error: code = NotFound desc = could not find container \"8320b4bf6cf3688e90f70ba1e9d0543cf0c4236c9baf7a9f5afd1fdd87cde7f1\": container with ID starting with 8320b4bf6cf3688e90f70ba1e9d0543cf0c4236c9baf7a9f5afd1fdd87cde7f1 not found: ID does not exist" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.976816 4751 scope.go:117] "RemoveContainer" containerID="ae2fb2bf76086e3824ffca58eb1d7c7c332b7d443fb93e1343b97051d794f81b" Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.977060 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 15:02:53 crc kubenswrapper[4751]: I0131 15:02:53.977253 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ae2fb2bf76086e3824ffca58eb1d7c7c332b7d443fb93e1343b97051d794f81b"} err="failed to get container status \"ae2fb2bf76086e3824ffca58eb1d7c7c332b7d443fb93e1343b97051d794f81b\": rpc error: code = NotFound desc = could not find container \"ae2fb2bf76086e3824ffca58eb1d7c7c332b7d443fb93e1343b97051d794f81b\": container with ID starting with ae2fb2bf76086e3824ffca58eb1d7c7c332b7d443fb93e1343b97051d794f81b not found: ID does not exist" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.003688 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] 
Jan 31 15:02:54 crc kubenswrapper[4751]: E0131 15:02:54.003978 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00ce2535-6386-444d-8bbd-abded7935ebf" containerName="glance-httpd" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.003992 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="00ce2535-6386-444d-8bbd-abded7935ebf" containerName="glance-httpd" Jan 31 15:02:54 crc kubenswrapper[4751]: E0131 15:02:54.004028 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="00ce2535-6386-444d-8bbd-abded7935ebf" containerName="glance-log" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.004038 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="00ce2535-6386-444d-8bbd-abded7935ebf" containerName="glance-log" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.004291 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="00ce2535-6386-444d-8bbd-abded7935ebf" containerName="glance-log" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.004333 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="00ce2535-6386-444d-8bbd-abded7935ebf" containerName="glance-httpd" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.005358 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.008730 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-internal-config-data" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.025443 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.066748 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kxgsq\" (UniqueName: \"kubernetes.io/projected/1c3cde72-72a2-4a51-a061-06397061de3c-kube-api-access-kxgsq\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.067140 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.067173 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.067302 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-lib-modules\") pod 
\"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.067363 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c3cde72-72a2-4a51-a061-06397061de3c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.067420 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.067481 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-run\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.067520 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-dev\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.067622 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-sys\") pod 
\"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.067655 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c3cde72-72a2-4a51-a061-06397061de3c-logs\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.068518 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c3cde72-72a2-4a51-a061-06397061de3c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.068622 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.068738 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c3cde72-72a2-4a51-a061-06397061de3c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.068812 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.170342 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c3cde72-72a2-4a51-a061-06397061de3c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.170418 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.170460 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c3cde72-72a2-4a51-a061-06397061de3c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.170495 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.170561 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kxgsq\" (UniqueName: 
\"kubernetes.io/projected/1c3cde72-72a2-4a51-a061-06397061de3c-kube-api-access-kxgsq\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.170588 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.170608 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.170629 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.170649 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c3cde72-72a2-4a51-a061-06397061de3c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.170674 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage13-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.170703 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-run\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.170737 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-dev\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.170780 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-sys\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.170802 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c3cde72-72a2-4a51-a061-06397061de3c-logs\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.170962 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c3cde72-72a2-4a51-a061-06397061de3c-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.171039 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.171106 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.171142 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.171171 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-run\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.171291 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " 
pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.171308 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c3cde72-72a2-4a51-a061-06397061de3c-logs\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.171349 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-sys\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.171343 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-dev\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.171441 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") device mount path \"/mnt/openstack/pv13\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.172354 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") device mount path \"/mnt/openstack/pv17\"" pod="glance-kuttl-tests/glance-default-internal-api-0" 
Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.185816 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c3cde72-72a2-4a51-a061-06397061de3c-scripts\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.187234 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c3cde72-72a2-4a51-a061-06397061de3c-config-data\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.188874 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kxgsq\" (UniqueName: \"kubernetes.io/projected/1c3cde72-72a2-4a51-a061-06397061de3c-kube-api-access-kxgsq\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.195596 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.212371 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-internal-api-0\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.325290 4751 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.429250 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="00ce2535-6386-444d-8bbd-abded7935ebf" path="/var/lib/kubelet/pods/00ce2535-6386-444d-8bbd-abded7935ebf/volumes" Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.774560 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 15:02:54 crc kubenswrapper[4751]: W0131 15:02:54.778952 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod1c3cde72_72a2_4a51_a061_06397061de3c.slice/crio-c898b2da5714126a65ee741ceb3ed33e63dd1016489a4c5e916d0a834f254ea4 WatchSource:0}: Error finding container c898b2da5714126a65ee741ceb3ed33e63dd1016489a4c5e916d0a834f254ea4: Status 404 returned error can't find the container with id c898b2da5714126a65ee741ceb3ed33e63dd1016489a4c5e916d0a834f254ea4 Jan 31 15:02:54 crc kubenswrapper[4751]: I0131 15:02:54.916240 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"1c3cde72-72a2-4a51-a061-06397061de3c","Type":"ContainerStarted","Data":"c898b2da5714126a65ee741ceb3ed33e63dd1016489a4c5e916d0a834f254ea4"} Jan 31 15:02:55 crc kubenswrapper[4751]: I0131 15:02:55.924493 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"1c3cde72-72a2-4a51-a061-06397061de3c","Type":"ContainerStarted","Data":"6735f01e0fcf3a4be1978348937bbe37355d165dbd9469a97f61438e89b09635"} Jan 31 15:02:55 crc kubenswrapper[4751]: I0131 15:02:55.924995 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" 
event={"ID":"1c3cde72-72a2-4a51-a061-06397061de3c","Type":"ContainerStarted","Data":"5f91babe44ec234dd474a96b96ebc6224d6c3a4f5e9aa9658f789a37025d6480"} Jan 31 15:02:55 crc kubenswrapper[4751]: I0131 15:02:55.956547 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-0" podStartSLOduration=2.95651657 podStartE2EDuration="2.95651657s" podCreationTimestamp="2026-01-31 15:02:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:02:55.942821799 +0000 UTC m=+1280.317534704" watchObservedRunningTime="2026-01-31 15:02:55.95651657 +0000 UTC m=+1280.331229495" Jan 31 15:03:00 crc kubenswrapper[4751]: I0131 15:03:00.463936 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:03:00 crc kubenswrapper[4751]: I0131 15:03:00.464494 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:03:00 crc kubenswrapper[4751]: I0131 15:03:00.491698 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:03:00 crc kubenswrapper[4751]: I0131 15:03:00.519371 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:03:00 crc kubenswrapper[4751]: I0131 15:03:00.968349 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:03:00 crc kubenswrapper[4751]: I0131 15:03:00.968855 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:03:02 crc kubenswrapper[4751]: I0131 15:03:02.815431 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" 
status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:03:02 crc kubenswrapper[4751]: I0131 15:03:02.842379 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:03:04 crc kubenswrapper[4751]: I0131 15:03:04.325862 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:03:04 crc kubenswrapper[4751]: I0131 15:03:04.326194 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:03:04 crc kubenswrapper[4751]: I0131 15:03:04.399088 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:03:04 crc kubenswrapper[4751]: I0131 15:03:04.498283 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:03:05 crc kubenswrapper[4751]: I0131 15:03:05.001513 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:03:05 crc kubenswrapper[4751]: I0131 15:03:05.001589 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:03:06 crc kubenswrapper[4751]: I0131 15:03:06.859757 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:03:06 crc kubenswrapper[4751]: I0131 15:03:06.922704 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.031667 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Jan 31 15:03:09 crc 
kubenswrapper[4751]: I0131 15:03:09.032989 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.043756 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.045996 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.050501 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.078876 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.129230 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-etc-iscsi\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.129280 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2142d4ca-115a-49b7-8f50-ac020fdbc342-scripts\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.129302 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-1\" (UID: 
\"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.129405 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-dev\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.129434 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2142d4ca-115a-49b7-8f50-ac020fdbc342-config-data\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.129454 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.129472 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-run\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.129495 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-var-locks-brick\") pod 
\"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.129513 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.129666 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-var-locks-brick\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.129715 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-etc-nvme\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.129739 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-lib-modules\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.129834 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-etc-iscsi\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.129895 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75pr7\" (UniqueName: \"kubernetes.io/projected/2142d4ca-115a-49b7-8f50-ac020fdbc342-kube-api-access-75pr7\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.129958 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2142d4ca-115a-49b7-8f50-ac020fdbc342-httpd-run\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.130000 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.130020 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-etc-nvme\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.130040 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40930074-48c4-404d-a55c-bb8a4f581f56-logs\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.130060 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-dev\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.130090 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-lib-modules\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.130112 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2142d4ca-115a-49b7-8f50-ac020fdbc342-logs\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.130148 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40930074-48c4-404d-a55c-bb8a4f581f56-scripts\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.130169 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rnnkf\" (UniqueName: \"kubernetes.io/projected/40930074-48c4-404d-a55c-bb8a4f581f56-kube-api-access-rnnkf\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.130184 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-sys\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.130212 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40930074-48c4-404d-a55c-bb8a4f581f56-config-data\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.130225 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-run\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.130250 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40930074-48c4-404d-a55c-bb8a4f581f56-httpd-run\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc 
kubenswrapper[4751]: I0131 15:03:09.130264 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-sys\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.148830 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.150006 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.157717 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.158908 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.167380 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.175175 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.231289 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-sys\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.231344 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-httpd-run\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.231372 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-config-data\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.231392 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " 
pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.231411 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.231429 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.231428 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-sys\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.231445 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-etc-iscsi\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.231479 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-etc-iscsi\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " 
pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.231514 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-run\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.231541 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.231546 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-etc-iscsi\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.231566 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2142d4ca-115a-49b7-8f50-ac020fdbc342-scripts\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.231588 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " 
pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.231771 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.231815 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage19-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage19-crc\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.231847 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.231895 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-dev\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.231928 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j7jg\" (UniqueName: \"kubernetes.io/projected/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-kube-api-access-7j7jg\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " 
pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.231950 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2142d4ca-115a-49b7-8f50-ac020fdbc342-config-data\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.231968 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-scripts\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.231993 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.232012 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-var-locks-brick\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.232035 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-sys\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " 
pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.232042 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-dev\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.232059 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-run\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.232109 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-var-locks-brick\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.232125 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") device mount path \"/mnt/openstack/pv03\"" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.232156 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") device mount path \"/mnt/openstack/pv04\"" 
pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.232246 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-run\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.232300 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-var-locks-brick\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.232129 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.232393 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63d398be-aa92-4a00-933b-549a0c4e4ad7-scripts\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.232414 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-var-locks-brick\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" 
Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.232445 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-run\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.232477 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-etc-nvme\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.232502 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-lib-modules\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.232533 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-etc-iscsi\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.232556 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75pr7\" (UniqueName: \"kubernetes.io/projected/2142d4ca-115a-49b7-8f50-ac020fdbc342-kube-api-access-75pr7\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 
crc kubenswrapper[4751]: I0131 15:03:09.232625 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-etc-nvme\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.232662 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-lib-modules\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.232715 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-var-locks-brick\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.232829 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2142d4ca-115a-49b7-8f50-ac020fdbc342-httpd-run\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.232846 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-etc-nvme\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.232864 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-dev\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.232887 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.232907 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pdj2p\" (UniqueName: \"kubernetes.io/projected/63d398be-aa92-4a00-933b-549a0c4e4ad7-kube-api-access-pdj2p\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.232932 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40930074-48c4-404d-a55c-bb8a4f581f56-logs\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.232939 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-etc-iscsi\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.232946 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/63d398be-aa92-4a00-933b-549a0c4e4ad7-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.233005 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-dev\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.233033 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-lib-modules\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.233066 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2142d4ca-115a-49b7-8f50-ac020fdbc342-logs\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.233109 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.233135 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-dev\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.233169 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-etc-nvme\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.233199 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40930074-48c4-404d-a55c-bb8a4f581f56-scripts\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.233221 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") device mount path \"/mnt/openstack/pv01\"" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.233239 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-logs\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.233254 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-etc-nvme\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.233267 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rnnkf\" (UniqueName: \"kubernetes.io/projected/40930074-48c4-404d-a55c-bb8a4f581f56-kube-api-access-rnnkf\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.233261 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2142d4ca-115a-49b7-8f50-ac020fdbc342-httpd-run\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.233300 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-lib-modules\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.233321 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-sys\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.233329 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-dev\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.233339 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63d398be-aa92-4a00-933b-549a0c4e4ad7-logs\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.233361 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-lib-modules\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.233367 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-sys\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.233396 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40930074-48c4-404d-a55c-bb8a4f581f56-config-data\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.233420 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"run\" (UniqueName: \"kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-run\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.233440 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63d398be-aa92-4a00-933b-549a0c4e4ad7-config-data\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.233485 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40930074-48c4-404d-a55c-bb8a4f581f56-httpd-run\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.233528 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40930074-48c4-404d-a55c-bb8a4f581f56-logs\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.233629 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2142d4ca-115a-49b7-8f50-ac020fdbc342-logs\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.233868 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: 
\"kubernetes.io/empty-dir/40930074-48c4-404d-a55c-bb8a4f581f56-httpd-run\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.233872 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-sys\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.233906 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-run\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.233974 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") device mount path \"/mnt/openstack/pv11\"" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.238388 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40930074-48c4-404d-a55c-bb8a4f581f56-scripts\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.239565 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40930074-48c4-404d-a55c-bb8a4f581f56-config-data\") pod 
\"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.253886 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2142d4ca-115a-49b7-8f50-ac020fdbc342-scripts\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.260872 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2142d4ca-115a-49b7-8f50-ac020fdbc342-config-data\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.264849 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.267315 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rnnkf\" (UniqueName: \"kubernetes.io/projected/40930074-48c4-404d-a55c-bb8a4f581f56-kube-api-access-rnnkf\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.269270 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75pr7\" (UniqueName: \"kubernetes.io/projected/2142d4ca-115a-49b7-8f50-ac020fdbc342-kube-api-access-75pr7\") pod \"glance-default-external-api-1\" (UID: 
\"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.278010 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"glance-default-external-api-1\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.278929 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.287922 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-external-api-2\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.334688 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-httpd-run\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.334753 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-config-data\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 
crc kubenswrapper[4751]: I0131 15:03:09.334773 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.334791 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.334808 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.334823 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-etc-iscsi\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.334845 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-run\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.334862 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.334877 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.334893 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage19-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage19-crc\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.334909 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.334940 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7j7jg\" (UniqueName: \"kubernetes.io/projected/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-kube-api-access-7j7jg\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.334961 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"scripts\" (UniqueName: \"kubernetes.io/secret/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-scripts\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.334988 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-var-locks-brick\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.335008 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-sys\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.335029 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63d398be-aa92-4a00-933b-549a0c4e4ad7-scripts\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.335051 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-run\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.335112 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-dev\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.335130 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pdj2p\" (UniqueName: \"kubernetes.io/projected/63d398be-aa92-4a00-933b-549a0c4e4ad7-kube-api-access-pdj2p\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.335147 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/63d398be-aa92-4a00-933b-549a0c4e4ad7-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.335172 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.335188 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-dev\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.335207 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-etc-nvme\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.335228 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-logs\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.335244 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-lib-modules\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.335258 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-sys\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.335273 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63d398be-aa92-4a00-933b-549a0c4e4ad7-logs\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.335293 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63d398be-aa92-4a00-933b-549a0c4e4ad7-config-data\") 
pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.335433 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-var-locks-brick\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.335921 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-run\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.335948 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-httpd-run\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.335975 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.336155 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " 
pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.336128 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage19-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage19-crc\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") device mount path \"/mnt/openstack/pv19\"" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.337499 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.337543 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-dev\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.337577 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-run\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.337650 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-sys\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc 
kubenswrapper[4751]: I0131 15:03:09.338709 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.338812 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") device mount path \"/mnt/openstack/pv07\"" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.339005 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63d398be-aa92-4a00-933b-549a0c4e4ad7-config-data\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.339087 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-dev\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.339132 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-etc-nvme\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.339394 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-logs\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.339441 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-lib-modules\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.339470 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-sys\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.339522 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/63d398be-aa92-4a00-933b-549a0c4e4ad7-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.339496 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") device mount path \"/mnt/openstack/pv20\"" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.336057 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume 
\"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") device mount path \"/mnt/openstack/pv09\"" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.339657 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-etc-iscsi\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.339782 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63d398be-aa92-4a00-933b-549a0c4e4ad7-logs\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.343309 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63d398be-aa92-4a00-933b-549a0c4e4ad7-scripts\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.343576 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-scripts\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.349911 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-config-data\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.351403 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.354486 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pdj2p\" (UniqueName: \"kubernetes.io/projected/63d398be-aa92-4a00-933b-549a0c4e4ad7-kube-api-access-pdj2p\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.359258 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j7jg\" (UniqueName: \"kubernetes.io/projected/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-kube-api-access-7j7jg\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.359848 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.364011 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage19-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage19-crc\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc 
kubenswrapper[4751]: I0131 15:03:09.370374 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.374280 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"glance-default-internal-api-2\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.383903 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"glance-default-internal-api-1\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.477048 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.487698 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.777765 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Jan 31 15:03:09 crc kubenswrapper[4751]: W0131 15:03:09.779048 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod40930074_48c4_404d_a55c_bb8a4f581f56.slice/crio-95584e8f76d354aa1bb1539546392e64aa9a3d998c839b11c54c3e4e4b46195b WatchSource:0}: Error finding container 95584e8f76d354aa1bb1539546392e64aa9a3d998c839b11c54c3e4e4b46195b: Status 404 returned error can't find the container with id 95584e8f76d354aa1bb1539546392e64aa9a3d998c839b11c54c3e4e4b46195b Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.851918 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Jan 31 15:03:09 crc kubenswrapper[4751]: W0131 15:03:09.918496 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf3d6d7db_fc12_479e_aedf_8ef829bf01e5.slice/crio-cc8b1dd3b31a488ef9b0862ddc7dae65875f8b41c155c15581f698c10d6ef4dd WatchSource:0}: Error finding container cc8b1dd3b31a488ef9b0862ddc7dae65875f8b41c155c15581f698c10d6ef4dd: Status 404 returned error can't find the container with id cc8b1dd3b31a488ef9b0862ddc7dae65875f8b41c155c15581f698c10d6ef4dd Jan 31 15:03:09 crc kubenswrapper[4751]: I0131 15:03:09.919231 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Jan 31 15:03:10 crc kubenswrapper[4751]: I0131 15:03:10.003260 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 15:03:10 crc kubenswrapper[4751]: I0131 15:03:10.041884 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"63d398be-aa92-4a00-933b-549a0c4e4ad7","Type":"ContainerStarted","Data":"af5a8366873c62793cd928a263608e14d01ee8087ae27093c452690e6adc2f31"} Jan 31 15:03:10 crc kubenswrapper[4751]: I0131 15:03:10.043817 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"2142d4ca-115a-49b7-8f50-ac020fdbc342","Type":"ContainerStarted","Data":"9cf14d4a3826deef636e4c06d5c8aee7e5d6320538dd47d4daa7da644059233a"} Jan 31 15:03:10 crc kubenswrapper[4751]: I0131 15:03:10.043859 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"2142d4ca-115a-49b7-8f50-ac020fdbc342","Type":"ContainerStarted","Data":"5f1a0e7c6277e92312ce6862469a84b79e4f876e98e63b342abc9b0fa8fe5418"} Jan 31 15:03:10 crc kubenswrapper[4751]: I0131 15:03:10.048925 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"40930074-48c4-404d-a55c-bb8a4f581f56","Type":"ContainerStarted","Data":"566c7ba296d8a89d8b54dce02f2ecb937ecdc55998a198c89a4aa08ceab608d9"} Jan 31 15:03:10 crc kubenswrapper[4751]: I0131 15:03:10.048974 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"40930074-48c4-404d-a55c-bb8a4f581f56","Type":"ContainerStarted","Data":"95584e8f76d354aa1bb1539546392e64aa9a3d998c839b11c54c3e4e4b46195b"} Jan 31 15:03:10 crc kubenswrapper[4751]: I0131 15:03:10.051051 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"f3d6d7db-fc12-479e-aedf-8ef829bf01e5","Type":"ContainerStarted","Data":"cc8b1dd3b31a488ef9b0862ddc7dae65875f8b41c155c15581f698c10d6ef4dd"} Jan 31 15:03:11 crc kubenswrapper[4751]: I0131 15:03:11.060276 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"40930074-48c4-404d-a55c-bb8a4f581f56","Type":"ContainerStarted","Data":"16e5e639ab2ee9cd835695e29e8846f4088bef1a2b7a0d28e0303930d813839c"} Jan 31 15:03:11 crc kubenswrapper[4751]: I0131 15:03:11.062532 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"f3d6d7db-fc12-479e-aedf-8ef829bf01e5","Type":"ContainerStarted","Data":"1dc237ef9e49d030bd026bc0cf53f0f8764a60860c14efa8846ee1aaa6f4fde8"} Jan 31 15:03:11 crc kubenswrapper[4751]: I0131 15:03:11.062626 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"f3d6d7db-fc12-479e-aedf-8ef829bf01e5","Type":"ContainerStarted","Data":"9a94ed6a4818186063a21a203b752d947f2a19e75f8bff027ed517efba40515f"} Jan 31 15:03:11 crc kubenswrapper[4751]: I0131 15:03:11.064588 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"63d398be-aa92-4a00-933b-549a0c4e4ad7","Type":"ContainerStarted","Data":"9c261553a8935996c28e0989101c6aa4dfa134fe2060a0f6d58b3c2531dcdda7"} Jan 31 15:03:11 crc kubenswrapper[4751]: I0131 15:03:11.064662 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"63d398be-aa92-4a00-933b-549a0c4e4ad7","Type":"ContainerStarted","Data":"136f5cdff8be1979c81bded97141738e6801ed16ea2d04c80f29ec3512279160"} Jan 31 15:03:11 crc kubenswrapper[4751]: I0131 15:03:11.066493 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"2142d4ca-115a-49b7-8f50-ac020fdbc342","Type":"ContainerStarted","Data":"dea90ac64b7d37ae389ad079e16e4c6e1e9d9d925e5e168a9ae5bb17022b8317"} Jan 31 15:03:11 crc kubenswrapper[4751]: I0131 15:03:11.115076 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="glance-kuttl-tests/glance-default-external-api-2" podStartSLOduration=4.115051404 podStartE2EDuration="4.115051404s" podCreationTimestamp="2026-01-31 15:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:03:11.08302054 +0000 UTC m=+1295.457733425" watchObservedRunningTime="2026-01-31 15:03:11.115051404 +0000 UTC m=+1295.489764289" Jan 31 15:03:11 crc kubenswrapper[4751]: I0131 15:03:11.115438 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-external-api-1" podStartSLOduration=4.115431384 podStartE2EDuration="4.115431384s" podCreationTimestamp="2026-01-31 15:03:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:03:11.108623805 +0000 UTC m=+1295.483336750" watchObservedRunningTime="2026-01-31 15:03:11.115431384 +0000 UTC m=+1295.490144269" Jan 31 15:03:11 crc kubenswrapper[4751]: I0131 15:03:11.141252 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-2" podStartSLOduration=3.141226914 podStartE2EDuration="3.141226914s" podCreationTimestamp="2026-01-31 15:03:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:03:11.13920198 +0000 UTC m=+1295.513914875" watchObservedRunningTime="2026-01-31 15:03:11.141226914 +0000 UTC m=+1295.515939829" Jan 31 15:03:11 crc kubenswrapper[4751]: I0131 15:03:11.173819 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-1" podStartSLOduration=3.173796462 podStartE2EDuration="3.173796462s" podCreationTimestamp="2026-01-31 15:03:08 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 
00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:03:11.168905613 +0000 UTC m=+1295.543618548" watchObservedRunningTime="2026-01-31 15:03:11.173796462 +0000 UTC m=+1295.548509367" Jan 31 15:03:19 crc kubenswrapper[4751]: I0131 15:03:19.352851 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:19 crc kubenswrapper[4751]: I0131 15:03:19.353391 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:19 crc kubenswrapper[4751]: I0131 15:03:19.371370 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:19 crc kubenswrapper[4751]: I0131 15:03:19.373106 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:19 crc kubenswrapper[4751]: I0131 15:03:19.377330 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:19 crc kubenswrapper[4751]: I0131 15:03:19.395646 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:19 crc kubenswrapper[4751]: I0131 15:03:19.397554 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:19 crc kubenswrapper[4751]: I0131 15:03:19.413987 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:19 crc kubenswrapper[4751]: I0131 15:03:19.477929 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:19 crc kubenswrapper[4751]: I0131 15:03:19.477975 4751 
kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:19 crc kubenswrapper[4751]: I0131 15:03:19.487875 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:19 crc kubenswrapper[4751]: I0131 15:03:19.488422 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:19 crc kubenswrapper[4751]: I0131 15:03:19.505540 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:19 crc kubenswrapper[4751]: I0131 15:03:19.525374 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:19 crc kubenswrapper[4751]: I0131 15:03:19.529823 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:19 crc kubenswrapper[4751]: I0131 15:03:19.554442 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:20 crc kubenswrapper[4751]: I0131 15:03:20.145799 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:20 crc kubenswrapper[4751]: I0131 15:03:20.146244 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:20 crc kubenswrapper[4751]: I0131 15:03:20.146311 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:20 crc kubenswrapper[4751]: I0131 15:03:20.146393 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:20 crc kubenswrapper[4751]: I0131 15:03:20.146455 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:20 crc kubenswrapper[4751]: I0131 15:03:20.146513 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:20 crc kubenswrapper[4751]: I0131 15:03:20.146575 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:20 crc kubenswrapper[4751]: I0131 15:03:20.146638 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:22 crc kubenswrapper[4751]: I0131 15:03:22.007312 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:22 crc kubenswrapper[4751]: I0131 15:03:22.012448 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:22 crc kubenswrapper[4751]: I0131 15:03:22.069913 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:22 crc kubenswrapper[4751]: I0131 15:03:22.116793 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:22 crc kubenswrapper[4751]: I0131 15:03:22.131551 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:22 crc kubenswrapper[4751]: I0131 15:03:22.158199 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 15:03:22 crc kubenswrapper[4751]: I0131 15:03:22.158198 4751 prober_manager.go:312] 
"Failed to trigger a manual run" probe="Readiness" Jan 31 15:03:22 crc kubenswrapper[4751]: I0131 15:03:22.158249 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 15:03:22 crc kubenswrapper[4751]: I0131 15:03:22.200671 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:22 crc kubenswrapper[4751]: I0131 15:03:22.231564 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:22 crc kubenswrapper[4751]: I0131 15:03:22.392342 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:23 crc kubenswrapper[4751]: I0131 15:03:23.714791 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Jan 31 15:03:23 crc kubenswrapper[4751]: I0131 15:03:23.726978 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Jan 31 15:03:23 crc kubenswrapper[4751]: I0131 15:03:23.879574 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Jan 31 15:03:23 crc kubenswrapper[4751]: I0131 15:03:23.895225 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 15:03:24 crc kubenswrapper[4751]: I0131 15:03:24.171626 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-1" podUID="2142d4ca-115a-49b7-8f50-ac020fdbc342" containerName="glance-log" containerID="cri-o://9cf14d4a3826deef636e4c06d5c8aee7e5d6320538dd47d4daa7da644059233a" gracePeriod=30 Jan 31 15:03:24 crc kubenswrapper[4751]: I0131 15:03:24.171734 4751 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="glance-kuttl-tests/glance-default-external-api-1" podUID="2142d4ca-115a-49b7-8f50-ac020fdbc342" containerName="glance-httpd" containerID="cri-o://dea90ac64b7d37ae389ad079e16e4c6e1e9d9d925e5e168a9ae5bb17022b8317" gracePeriod=30 Jan 31 15:03:24 crc kubenswrapper[4751]: I0131 15:03:24.172865 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-2" podUID="40930074-48c4-404d-a55c-bb8a4f581f56" containerName="glance-log" containerID="cri-o://566c7ba296d8a89d8b54dce02f2ecb937ecdc55998a198c89a4aa08ceab608d9" gracePeriod=30 Jan 31 15:03:24 crc kubenswrapper[4751]: I0131 15:03:24.172895 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-2" podUID="40930074-48c4-404d-a55c-bb8a4f581f56" containerName="glance-httpd" containerID="cri-o://16e5e639ab2ee9cd835695e29e8846f4088bef1a2b7a0d28e0303930d813839c" gracePeriod=30 Jan 31 15:03:25 crc kubenswrapper[4751]: I0131 15:03:25.182212 4751 generic.go:334] "Generic (PLEG): container finished" podID="40930074-48c4-404d-a55c-bb8a4f581f56" containerID="566c7ba296d8a89d8b54dce02f2ecb937ecdc55998a198c89a4aa08ceab608d9" exitCode=143 Jan 31 15:03:25 crc kubenswrapper[4751]: I0131 15:03:25.182320 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"40930074-48c4-404d-a55c-bb8a4f581f56","Type":"ContainerDied","Data":"566c7ba296d8a89d8b54dce02f2ecb937ecdc55998a198c89a4aa08ceab608d9"} Jan 31 15:03:25 crc kubenswrapper[4751]: I0131 15:03:25.184819 4751 generic.go:334] "Generic (PLEG): container finished" podID="2142d4ca-115a-49b7-8f50-ac020fdbc342" containerID="9cf14d4a3826deef636e4c06d5c8aee7e5d6320538dd47d4daa7da644059233a" exitCode=143 Jan 31 15:03:25 crc kubenswrapper[4751]: I0131 15:03:25.184907 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" 
event={"ID":"2142d4ca-115a-49b7-8f50-ac020fdbc342","Type":"ContainerDied","Data":"9cf14d4a3826deef636e4c06d5c8aee7e5d6320538dd47d4daa7da644059233a"} Jan 31 15:03:25 crc kubenswrapper[4751]: I0131 15:03:25.185237 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-2" podUID="f3d6d7db-fc12-479e-aedf-8ef829bf01e5" containerName="glance-log" containerID="cri-o://9a94ed6a4818186063a21a203b752d947f2a19e75f8bff027ed517efba40515f" gracePeriod=30 Jan 31 15:03:25 crc kubenswrapper[4751]: I0131 15:03:25.185314 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-2" podUID="f3d6d7db-fc12-479e-aedf-8ef829bf01e5" containerName="glance-httpd" containerID="cri-o://1dc237ef9e49d030bd026bc0cf53f0f8764a60860c14efa8846ee1aaa6f4fde8" gracePeriod=30 Jan 31 15:03:25 crc kubenswrapper[4751]: I0131 15:03:25.185617 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="63d398be-aa92-4a00-933b-549a0c4e4ad7" containerName="glance-log" containerID="cri-o://136f5cdff8be1979c81bded97141738e6801ed16ea2d04c80f29ec3512279160" gracePeriod=30 Jan 31 15:03:25 crc kubenswrapper[4751]: I0131 15:03:25.185681 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="63d398be-aa92-4a00-933b-549a0c4e4ad7" containerName="glance-httpd" containerID="cri-o://9c261553a8935996c28e0989101c6aa4dfa134fe2060a0f6d58b3c2531dcdda7" gracePeriod=30 Jan 31 15:03:26 crc kubenswrapper[4751]: I0131 15:03:26.194850 4751 generic.go:334] "Generic (PLEG): container finished" podID="f3d6d7db-fc12-479e-aedf-8ef829bf01e5" containerID="9a94ed6a4818186063a21a203b752d947f2a19e75f8bff027ed517efba40515f" exitCode=143 Jan 31 15:03:26 crc kubenswrapper[4751]: I0131 15:03:26.194947 4751 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"f3d6d7db-fc12-479e-aedf-8ef829bf01e5","Type":"ContainerDied","Data":"9a94ed6a4818186063a21a203b752d947f2a19e75f8bff027ed517efba40515f"} Jan 31 15:03:26 crc kubenswrapper[4751]: I0131 15:03:26.199221 4751 generic.go:334] "Generic (PLEG): container finished" podID="63d398be-aa92-4a00-933b-549a0c4e4ad7" containerID="136f5cdff8be1979c81bded97141738e6801ed16ea2d04c80f29ec3512279160" exitCode=143 Jan 31 15:03:26 crc kubenswrapper[4751]: I0131 15:03:26.199275 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"63d398be-aa92-4a00-933b-549a0c4e4ad7","Type":"ContainerDied","Data":"136f5cdff8be1979c81bded97141738e6801ed16ea2d04c80f29ec3512279160"} Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.796450 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.800858 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.840583 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-etc-nvme\") pod \"2142d4ca-115a-49b7-8f50-ac020fdbc342\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.840631 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") pod \"2142d4ca-115a-49b7-8f50-ac020fdbc342\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.840668 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40930074-48c4-404d-a55c-bb8a4f581f56-logs\") pod \"40930074-48c4-404d-a55c-bb8a4f581f56\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.840692 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-dev\") pod \"2142d4ca-115a-49b7-8f50-ac020fdbc342\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.840724 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-var-locks-brick\") pod \"40930074-48c4-404d-a55c-bb8a4f581f56\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.840738 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-etc-nvme" (OuterVolumeSpecName: 
"etc-nvme") pod "2142d4ca-115a-49b7-8f50-ac020fdbc342" (UID: "2142d4ca-115a-49b7-8f50-ac020fdbc342"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.840755 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2142d4ca-115a-49b7-8f50-ac020fdbc342-httpd-run\") pod \"2142d4ca-115a-49b7-8f50-ac020fdbc342\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.840852 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-etc-iscsi\") pod \"40930074-48c4-404d-a55c-bb8a4f581f56\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.840889 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-etc-iscsi\") pod \"2142d4ca-115a-49b7-8f50-ac020fdbc342\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.840943 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-dev" (OuterVolumeSpecName: "dev") pod "2142d4ca-115a-49b7-8f50-ac020fdbc342" (UID: "2142d4ca-115a-49b7-8f50-ac020fdbc342"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.840955 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "40930074-48c4-404d-a55c-bb8a4f581f56" (UID: "40930074-48c4-404d-a55c-bb8a4f581f56"). 
InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.840987 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "40930074-48c4-404d-a55c-bb8a4f581f56" (UID: "40930074-48c4-404d-a55c-bb8a4f581f56"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.841037 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2142d4ca-115a-49b7-8f50-ac020fdbc342-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2142d4ca-115a-49b7-8f50-ac020fdbc342" (UID: "2142d4ca-115a-49b7-8f50-ac020fdbc342"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.841085 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "2142d4ca-115a-49b7-8f50-ac020fdbc342" (UID: "2142d4ca-115a-49b7-8f50-ac020fdbc342"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.841297 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40930074-48c4-404d-a55c-bb8a4f581f56-logs" (OuterVolumeSpecName: "logs") pod "40930074-48c4-404d-a55c-bb8a4f581f56" (UID: "40930074-48c4-404d-a55c-bb8a4f581f56"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.841371 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2142d4ca-115a-49b7-8f50-ac020fdbc342-logs\") pod \"2142d4ca-115a-49b7-8f50-ac020fdbc342\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.841618 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2142d4ca-115a-49b7-8f50-ac020fdbc342-logs" (OuterVolumeSpecName: "logs") pod "2142d4ca-115a-49b7-8f50-ac020fdbc342" (UID: "2142d4ca-115a-49b7-8f50-ac020fdbc342"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.841645 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-75pr7\" (UniqueName: \"kubernetes.io/projected/2142d4ca-115a-49b7-8f50-ac020fdbc342-kube-api-access-75pr7\") pod \"2142d4ca-115a-49b7-8f50-ac020fdbc342\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.841669 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-dev\") pod \"40930074-48c4-404d-a55c-bb8a4f581f56\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.842141 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40930074-48c4-404d-a55c-bb8a4f581f56-scripts\") pod \"40930074-48c4-404d-a55c-bb8a4f581f56\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.842170 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"40930074-48c4-404d-a55c-bb8a4f581f56\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.842189 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-lib-modules\") pod \"40930074-48c4-404d-a55c-bb8a4f581f56\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.842216 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") pod \"2142d4ca-115a-49b7-8f50-ac020fdbc342\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.842234 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnnkf\" (UniqueName: \"kubernetes.io/projected/40930074-48c4-404d-a55c-bb8a4f581f56-kube-api-access-rnnkf\") pod \"40930074-48c4-404d-a55c-bb8a4f581f56\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.842248 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-run\") pod \"2142d4ca-115a-49b7-8f50-ac020fdbc342\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.842266 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-etc-nvme\") pod \"40930074-48c4-404d-a55c-bb8a4f581f56\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.842293 4751 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40930074-48c4-404d-a55c-bb8a4f581f56-config-data\") pod \"40930074-48c4-404d-a55c-bb8a4f581f56\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.842321 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2142d4ca-115a-49b7-8f50-ac020fdbc342-config-data\") pod \"2142d4ca-115a-49b7-8f50-ac020fdbc342\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.842341 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-var-locks-brick\") pod \"2142d4ca-115a-49b7-8f50-ac020fdbc342\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.842395 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40930074-48c4-404d-a55c-bb8a4f581f56-httpd-run\") pod \"40930074-48c4-404d-a55c-bb8a4f581f56\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.842419 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"40930074-48c4-404d-a55c-bb8a4f581f56\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.842443 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2142d4ca-115a-49b7-8f50-ac020fdbc342-scripts\") pod \"2142d4ca-115a-49b7-8f50-ac020fdbc342\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 
15:03:27.842465 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-lib-modules\") pod \"2142d4ca-115a-49b7-8f50-ac020fdbc342\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.842482 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-sys\") pod \"2142d4ca-115a-49b7-8f50-ac020fdbc342\" (UID: \"2142d4ca-115a-49b7-8f50-ac020fdbc342\") " Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.842535 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-sys\") pod \"40930074-48c4-404d-a55c-bb8a4f581f56\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.842561 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-run\") pod \"40930074-48c4-404d-a55c-bb8a4f581f56\" (UID: \"40930074-48c4-404d-a55c-bb8a4f581f56\") " Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.843036 4751 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.843054 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/40930074-48c4-404d-a55c-bb8a4f581f56-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.843677 4751 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: 
\"kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-dev\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.843692 4751 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.843702 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2142d4ca-115a-49b7-8f50-ac020fdbc342-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.843711 4751 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.843719 4751 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.843728 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2142d4ca-115a-49b7-8f50-ac020fdbc342-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.843758 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-run" (OuterVolumeSpecName: "run") pod "40930074-48c4-404d-a55c-bb8a4f581f56" (UID: "40930074-48c4-404d-a55c-bb8a4f581f56"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.843790 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-dev" (OuterVolumeSpecName: "dev") pod "40930074-48c4-404d-a55c-bb8a4f581f56" (UID: "40930074-48c4-404d-a55c-bb8a4f581f56"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.845762 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "40930074-48c4-404d-a55c-bb8a4f581f56" (UID: "40930074-48c4-404d-a55c-bb8a4f581f56"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.845795 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-sys" (OuterVolumeSpecName: "sys") pod "2142d4ca-115a-49b7-8f50-ac020fdbc342" (UID: "2142d4ca-115a-49b7-8f50-ac020fdbc342"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.845846 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2142d4ca-115a-49b7-8f50-ac020fdbc342" (UID: "2142d4ca-115a-49b7-8f50-ac020fdbc342"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.845870 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "40930074-48c4-404d-a55c-bb8a4f581f56" (UID: "40930074-48c4-404d-a55c-bb8a4f581f56"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.845887 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-run" (OuterVolumeSpecName: "run") pod "2142d4ca-115a-49b7-8f50-ac020fdbc342" (UID: "2142d4ca-115a-49b7-8f50-ac020fdbc342"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.846213 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "2142d4ca-115a-49b7-8f50-ac020fdbc342" (UID: "2142d4ca-115a-49b7-8f50-ac020fdbc342"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.846523 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/40930074-48c4-404d-a55c-bb8a4f581f56-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "40930074-48c4-404d-a55c-bb8a4f581f56" (UID: "40930074-48c4-404d-a55c-bb8a4f581f56"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.846570 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-sys" (OuterVolumeSpecName: "sys") pod "40930074-48c4-404d-a55c-bb8a4f581f56" (UID: "40930074-48c4-404d-a55c-bb8a4f581f56"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.849825 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/40930074-48c4-404d-a55c-bb8a4f581f56-kube-api-access-rnnkf" (OuterVolumeSpecName: "kube-api-access-rnnkf") pod "40930074-48c4-404d-a55c-bb8a4f581f56" (UID: "40930074-48c4-404d-a55c-bb8a4f581f56"). InnerVolumeSpecName "kube-api-access-rnnkf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.851637 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance-cache") pod "40930074-48c4-404d-a55c-bb8a4f581f56" (UID: "40930074-48c4-404d-a55c-bb8a4f581f56"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.854365 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage04-crc" (OuterVolumeSpecName: "glance") pod "2142d4ca-115a-49b7-8f50-ac020fdbc342" (UID: "2142d4ca-115a-49b7-8f50-ac020fdbc342"). InnerVolumeSpecName "local-storage04-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.854497 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "40930074-48c4-404d-a55c-bb8a4f581f56" (UID: "40930074-48c4-404d-a55c-bb8a4f581f56"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.855364 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2142d4ca-115a-49b7-8f50-ac020fdbc342-kube-api-access-75pr7" (OuterVolumeSpecName: "kube-api-access-75pr7") pod "2142d4ca-115a-49b7-8f50-ac020fdbc342" (UID: "2142d4ca-115a-49b7-8f50-ac020fdbc342"). InnerVolumeSpecName "kube-api-access-75pr7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.858609 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40930074-48c4-404d-a55c-bb8a4f581f56-scripts" (OuterVolumeSpecName: "scripts") pod "40930074-48c4-404d-a55c-bb8a4f581f56" (UID: "40930074-48c4-404d-a55c-bb8a4f581f56"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.860742 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage03-crc" (OuterVolumeSpecName: "glance-cache") pod "2142d4ca-115a-49b7-8f50-ac020fdbc342" (UID: "2142d4ca-115a-49b7-8f50-ac020fdbc342"). InnerVolumeSpecName "local-storage03-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.862306 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2142d4ca-115a-49b7-8f50-ac020fdbc342-scripts" (OuterVolumeSpecName: "scripts") pod "2142d4ca-115a-49b7-8f50-ac020fdbc342" (UID: "2142d4ca-115a-49b7-8f50-ac020fdbc342"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.885408 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2142d4ca-115a-49b7-8f50-ac020fdbc342-config-data" (OuterVolumeSpecName: "config-data") pod "2142d4ca-115a-49b7-8f50-ac020fdbc342" (UID: "2142d4ca-115a-49b7-8f50-ac020fdbc342"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.890435 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/40930074-48c4-404d-a55c-bb8a4f581f56-config-data" (OuterVolumeSpecName: "config-data") pod "40930074-48c4-404d-a55c-bb8a4f581f56" (UID: "40930074-48c4-404d-a55c-bb8a4f581f56"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.944637 4751 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.944669 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/40930074-48c4-404d-a55c-bb8a4f581f56-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.944701 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.944709 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2142d4ca-115a-49b7-8f50-ac020fdbc342-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.944719 4751 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.944727 4751 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-sys\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.944735 4751 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-sys\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.944743 4751 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.944754 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" " Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.944763 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-75pr7\" (UniqueName: \"kubernetes.io/projected/2142d4ca-115a-49b7-8f50-ac020fdbc342-kube-api-access-75pr7\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.944772 4751 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-dev\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.944781 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/40930074-48c4-404d-a55c-bb8a4f581f56-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.944792 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.944801 4751 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.944812 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" " Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.944820 4751 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-rnnkf\" (UniqueName: \"kubernetes.io/projected/40930074-48c4-404d-a55c-bb8a4f581f56-kube-api-access-rnnkf\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.944829 4751 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2142d4ca-115a-49b7-8f50-ac020fdbc342-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.944837 4751 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/40930074-48c4-404d-a55c-bb8a4f581f56-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.944844 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/40930074-48c4-404d-a55c-bb8a4f581f56-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.944851 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2142d4ca-115a-49b7-8f50-ac020fdbc342-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.960575 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage03-crc" (UniqueName: "kubernetes.io/local-volume/local-storage03-crc") on node "crc" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.960705 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.962106 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage04-crc" (UniqueName: "kubernetes.io/local-volume/local-storage04-crc") on node "crc" Jan 31 15:03:27 crc kubenswrapper[4751]: I0131 15:03:27.964163 4751 operation_generator.go:917] 
UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.047459 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage04-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage04-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.047500 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.047538 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage03-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage03-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.047550 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.215352 4751 generic.go:334] "Generic (PLEG): container finished" podID="40930074-48c4-404d-a55c-bb8a4f581f56" containerID="16e5e639ab2ee9cd835695e29e8846f4088bef1a2b7a0d28e0303930d813839c" exitCode=0 Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.215423 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" event={"ID":"40930074-48c4-404d-a55c-bb8a4f581f56","Type":"ContainerDied","Data":"16e5e639ab2ee9cd835695e29e8846f4088bef1a2b7a0d28e0303930d813839c"} Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.215454 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-2" 
event={"ID":"40930074-48c4-404d-a55c-bb8a4f581f56","Type":"ContainerDied","Data":"95584e8f76d354aa1bb1539546392e64aa9a3d998c839b11c54c3e4e4b46195b"} Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.215473 4751 scope.go:117] "RemoveContainer" containerID="16e5e639ab2ee9cd835695e29e8846f4088bef1a2b7a0d28e0303930d813839c" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.215552 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-2" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.217239 4751 generic.go:334] "Generic (PLEG): container finished" podID="2142d4ca-115a-49b7-8f50-ac020fdbc342" containerID="dea90ac64b7d37ae389ad079e16e4c6e1e9d9d925e5e168a9ae5bb17022b8317" exitCode=0 Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.217312 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"2142d4ca-115a-49b7-8f50-ac020fdbc342","Type":"ContainerDied","Data":"dea90ac64b7d37ae389ad079e16e4c6e1e9d9d925e5e168a9ae5bb17022b8317"} Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.217422 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"2142d4ca-115a-49b7-8f50-ac020fdbc342","Type":"ContainerDied","Data":"5f1a0e7c6277e92312ce6862469a84b79e4f876e98e63b342abc9b0fa8fe5418"} Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.217336 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.238703 4751 scope.go:117] "RemoveContainer" containerID="566c7ba296d8a89d8b54dce02f2ecb937ecdc55998a198c89a4aa08ceab608d9" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.258306 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.269204 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.278609 4751 scope.go:117] "RemoveContainer" containerID="16e5e639ab2ee9cd835695e29e8846f4088bef1a2b7a0d28e0303930d813839c" Jan 31 15:03:28 crc kubenswrapper[4751]: E0131 15:03:28.279327 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"16e5e639ab2ee9cd835695e29e8846f4088bef1a2b7a0d28e0303930d813839c\": container with ID starting with 16e5e639ab2ee9cd835695e29e8846f4088bef1a2b7a0d28e0303930d813839c not found: ID does not exist" containerID="16e5e639ab2ee9cd835695e29e8846f4088bef1a2b7a0d28e0303930d813839c" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.279437 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"16e5e639ab2ee9cd835695e29e8846f4088bef1a2b7a0d28e0303930d813839c"} err="failed to get container status \"16e5e639ab2ee9cd835695e29e8846f4088bef1a2b7a0d28e0303930d813839c\": rpc error: code = NotFound desc = could not find container \"16e5e639ab2ee9cd835695e29e8846f4088bef1a2b7a0d28e0303930d813839c\": container with ID starting with 16e5e639ab2ee9cd835695e29e8846f4088bef1a2b7a0d28e0303930d813839c not found: ID does not exist" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.279474 4751 scope.go:117] "RemoveContainer" 
containerID="566c7ba296d8a89d8b54dce02f2ecb937ecdc55998a198c89a4aa08ceab608d9" Jan 31 15:03:28 crc kubenswrapper[4751]: E0131 15:03:28.279786 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"566c7ba296d8a89d8b54dce02f2ecb937ecdc55998a198c89a4aa08ceab608d9\": container with ID starting with 566c7ba296d8a89d8b54dce02f2ecb937ecdc55998a198c89a4aa08ceab608d9 not found: ID does not exist" containerID="566c7ba296d8a89d8b54dce02f2ecb937ecdc55998a198c89a4aa08ceab608d9" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.279867 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"566c7ba296d8a89d8b54dce02f2ecb937ecdc55998a198c89a4aa08ceab608d9"} err="failed to get container status \"566c7ba296d8a89d8b54dce02f2ecb937ecdc55998a198c89a4aa08ceab608d9\": rpc error: code = NotFound desc = could not find container \"566c7ba296d8a89d8b54dce02f2ecb937ecdc55998a198c89a4aa08ceab608d9\": container with ID starting with 566c7ba296d8a89d8b54dce02f2ecb937ecdc55998a198c89a4aa08ceab608d9 not found: ID does not exist" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.279900 4751 scope.go:117] "RemoveContainer" containerID="dea90ac64b7d37ae389ad079e16e4c6e1e9d9d925e5e168a9ae5bb17022b8317" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.301465 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.304604 4751 scope.go:117] "RemoveContainer" containerID="9cf14d4a3826deef636e4c06d5c8aee7e5d6320538dd47d4daa7da644059233a" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.311930 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-2"] Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.323908 4751 scope.go:117] "RemoveContainer" 
containerID="dea90ac64b7d37ae389ad079e16e4c6e1e9d9d925e5e168a9ae5bb17022b8317" Jan 31 15:03:28 crc kubenswrapper[4751]: E0131 15:03:28.324472 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dea90ac64b7d37ae389ad079e16e4c6e1e9d9d925e5e168a9ae5bb17022b8317\": container with ID starting with dea90ac64b7d37ae389ad079e16e4c6e1e9d9d925e5e168a9ae5bb17022b8317 not found: ID does not exist" containerID="dea90ac64b7d37ae389ad079e16e4c6e1e9d9d925e5e168a9ae5bb17022b8317" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.324524 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dea90ac64b7d37ae389ad079e16e4c6e1e9d9d925e5e168a9ae5bb17022b8317"} err="failed to get container status \"dea90ac64b7d37ae389ad079e16e4c6e1e9d9d925e5e168a9ae5bb17022b8317\": rpc error: code = NotFound desc = could not find container \"dea90ac64b7d37ae389ad079e16e4c6e1e9d9d925e5e168a9ae5bb17022b8317\": container with ID starting with dea90ac64b7d37ae389ad079e16e4c6e1e9d9d925e5e168a9ae5bb17022b8317 not found: ID does not exist" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.324558 4751 scope.go:117] "RemoveContainer" containerID="9cf14d4a3826deef636e4c06d5c8aee7e5d6320538dd47d4daa7da644059233a" Jan 31 15:03:28 crc kubenswrapper[4751]: E0131 15:03:28.324926 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9cf14d4a3826deef636e4c06d5c8aee7e5d6320538dd47d4daa7da644059233a\": container with ID starting with 9cf14d4a3826deef636e4c06d5c8aee7e5d6320538dd47d4daa7da644059233a not found: ID does not exist" containerID="9cf14d4a3826deef636e4c06d5c8aee7e5d6320538dd47d4daa7da644059233a" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.324946 4751 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"9cf14d4a3826deef636e4c06d5c8aee7e5d6320538dd47d4daa7da644059233a"} err="failed to get container status \"9cf14d4a3826deef636e4c06d5c8aee7e5d6320538dd47d4daa7da644059233a\": rpc error: code = NotFound desc = could not find container \"9cf14d4a3826deef636e4c06d5c8aee7e5d6320538dd47d4daa7da644059233a\": container with ID starting with 9cf14d4a3826deef636e4c06d5c8aee7e5d6320538dd47d4daa7da644059233a not found: ID does not exist" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.414870 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2142d4ca-115a-49b7-8f50-ac020fdbc342" path="/var/lib/kubelet/pods/2142d4ca-115a-49b7-8f50-ac020fdbc342/volumes" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.415938 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="40930074-48c4-404d-a55c-bb8a4f581f56" path="/var/lib/kubelet/pods/40930074-48c4-404d-a55c-bb8a4f581f56/volumes" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.743040 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.750170 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.857792 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-sys\") pod \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.857962 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-sys" (OuterVolumeSpecName: "sys") pod "f3d6d7db-fc12-479e-aedf-8ef829bf01e5" (UID: "f3d6d7db-fc12-479e-aedf-8ef829bf01e5"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.858125 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-dev" (OuterVolumeSpecName: "dev") pod "63d398be-aa92-4a00-933b-549a0c4e4ad7" (UID: "63d398be-aa92-4a00-933b-549a0c4e4ad7"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.858059 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-dev\") pod \"63d398be-aa92-4a00-933b-549a0c4e4ad7\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.860307 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-lib-modules\") pod \"63d398be-aa92-4a00-933b-549a0c4e4ad7\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.860363 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-config-data\") pod \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.860382 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-run\") pod \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.860415 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "63d398be-aa92-4a00-933b-549a0c4e4ad7" (UID: "63d398be-aa92-4a00-933b-549a0c4e4ad7"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.860437 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-sys\") pod \"63d398be-aa92-4a00-933b-549a0c4e4ad7\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.860465 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") pod \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.860504 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-var-locks-brick\") pod \"63d398be-aa92-4a00-933b-549a0c4e4ad7\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.860528 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-logs\") pod \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.860552 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63d398be-aa92-4a00-933b-549a0c4e4ad7-logs\") pod \"63d398be-aa92-4a00-933b-549a0c4e4ad7\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.860598 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-run\") pod 
\"63d398be-aa92-4a00-933b-549a0c4e4ad7\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.860616 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-lib-modules\") pod \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.860656 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63d398be-aa92-4a00-933b-549a0c4e4ad7-config-data\") pod \"63d398be-aa92-4a00-933b-549a0c4e4ad7\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.860680 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-scripts\") pod \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.860699 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-etc-nvme\") pod \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.860755 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/63d398be-aa92-4a00-933b-549a0c4e4ad7-scripts\") pod \"63d398be-aa92-4a00-933b-549a0c4e4ad7\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.860787 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j7jg\" (UniqueName: 
\"kubernetes.io/projected/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-kube-api-access-7j7jg\") pod \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.860843 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-etc-nvme\") pod \"63d398be-aa92-4a00-933b-549a0c4e4ad7\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.860870 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-httpd-run\") pod \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.860920 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage19-crc\") pod \"63d398be-aa92-4a00-933b-549a0c4e4ad7\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.860940 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-etc-iscsi\") pod \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.860988 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-etc-iscsi\") pod \"63d398be-aa92-4a00-933b-549a0c4e4ad7\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.861021 4751 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"63d398be-aa92-4a00-933b-549a0c4e4ad7\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.861115 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/63d398be-aa92-4a00-933b-549a0c4e4ad7-httpd-run\") pod \"63d398be-aa92-4a00-933b-549a0c4e4ad7\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.861133 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-run" (OuterVolumeSpecName: "run") pod "f3d6d7db-fc12-479e-aedf-8ef829bf01e5" (UID: "f3d6d7db-fc12-479e-aedf-8ef829bf01e5"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.861140 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-dev\") pod \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.861169 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") pod \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.861173 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63d398be-aa92-4a00-933b-549a0c4e4ad7-logs" (OuterVolumeSpecName: "logs") pod "63d398be-aa92-4a00-933b-549a0c4e4ad7" (UID: "63d398be-aa92-4a00-933b-549a0c4e4ad7"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.861198 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pdj2p\" (UniqueName: \"kubernetes.io/projected/63d398be-aa92-4a00-933b-549a0c4e4ad7-kube-api-access-pdj2p\") pod \"63d398be-aa92-4a00-933b-549a0c4e4ad7\" (UID: \"63d398be-aa92-4a00-933b-549a0c4e4ad7\") " Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.861206 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-run" (OuterVolumeSpecName: "run") pod "63d398be-aa92-4a00-933b-549a0c4e4ad7" (UID: "63d398be-aa92-4a00-933b-549a0c4e4ad7"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.861216 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-var-locks-brick\") pod \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\" (UID: \"f3d6d7db-fc12-479e-aedf-8ef829bf01e5\") " Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.861187 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-dev" (OuterVolumeSpecName: "dev") pod "f3d6d7db-fc12-479e-aedf-8ef829bf01e5" (UID: "f3d6d7db-fc12-479e-aedf-8ef829bf01e5"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.861252 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "63d398be-aa92-4a00-933b-549a0c4e4ad7" (UID: "63d398be-aa92-4a00-933b-549a0c4e4ad7"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.861232 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "f3d6d7db-fc12-479e-aedf-8ef829bf01e5" (UID: "f3d6d7db-fc12-479e-aedf-8ef829bf01e5"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.861466 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-logs" (OuterVolumeSpecName: "logs") pod "f3d6d7db-fc12-479e-aedf-8ef829bf01e5" (UID: "f3d6d7db-fc12-479e-aedf-8ef829bf01e5"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.861490 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "f3d6d7db-fc12-479e-aedf-8ef829bf01e5" (UID: "f3d6d7db-fc12-479e-aedf-8ef829bf01e5"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.861645 4751 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.861658 4751 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.861667 4751 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-dev\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.861675 4751 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.861703 4751 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-sys\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.861711 4751 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-dev\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.861720 4751 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.861729 4751 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.861738 4751 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.861747 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.861756 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/63d398be-aa92-4a00-933b-549a0c4e4ad7-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.862242 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-sys" (OuterVolumeSpecName: "sys") pod "63d398be-aa92-4a00-933b-549a0c4e4ad7" (UID: "63d398be-aa92-4a00-933b-549a0c4e4ad7"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.862347 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "f3d6d7db-fc12-479e-aedf-8ef829bf01e5" (UID: "f3d6d7db-fc12-479e-aedf-8ef829bf01e5"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.862379 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "f3d6d7db-fc12-479e-aedf-8ef829bf01e5" (UID: "f3d6d7db-fc12-479e-aedf-8ef829bf01e5"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.862761 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "63d398be-aa92-4a00-933b-549a0c4e4ad7" (UID: "63d398be-aa92-4a00-933b-549a0c4e4ad7"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.862858 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "63d398be-aa92-4a00-933b-549a0c4e4ad7" (UID: "63d398be-aa92-4a00-933b-549a0c4e4ad7"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.862892 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "f3d6d7db-fc12-479e-aedf-8ef829bf01e5" (UID: "f3d6d7db-fc12-479e-aedf-8ef829bf01e5"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.863360 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/63d398be-aa92-4a00-933b-549a0c4e4ad7-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "63d398be-aa92-4a00-933b-549a0c4e4ad7" (UID: "63d398be-aa92-4a00-933b-549a0c4e4ad7"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.863805 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage07-crc" (OuterVolumeSpecName: "glance-cache") pod "f3d6d7db-fc12-479e-aedf-8ef829bf01e5" (UID: "f3d6d7db-fc12-479e-aedf-8ef829bf01e5"). InnerVolumeSpecName "local-storage07-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.864843 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-scripts" (OuterVolumeSpecName: "scripts") pod "f3d6d7db-fc12-479e-aedf-8ef829bf01e5" (UID: "f3d6d7db-fc12-479e-aedf-8ef829bf01e5"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.865610 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-kube-api-access-7j7jg" (OuterVolumeSpecName: "kube-api-access-7j7jg") pod "f3d6d7db-fc12-479e-aedf-8ef829bf01e5" (UID: "f3d6d7db-fc12-479e-aedf-8ef829bf01e5"). InnerVolumeSpecName "kube-api-access-7j7jg". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.866046 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63d398be-aa92-4a00-933b-549a0c4e4ad7-scripts" (OuterVolumeSpecName: "scripts") pod "63d398be-aa92-4a00-933b-549a0c4e4ad7" (UID: "63d398be-aa92-4a00-933b-549a0c4e4ad7"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.866512 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage09-crc" (OuterVolumeSpecName: "glance") pod "f3d6d7db-fc12-479e-aedf-8ef829bf01e5" (UID: "f3d6d7db-fc12-479e-aedf-8ef829bf01e5"). InnerVolumeSpecName "local-storage09-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.867392 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/63d398be-aa92-4a00-933b-549a0c4e4ad7-kube-api-access-pdj2p" (OuterVolumeSpecName: "kube-api-access-pdj2p") pod "63d398be-aa92-4a00-933b-549a0c4e4ad7" (UID: "63d398be-aa92-4a00-933b-549a0c4e4ad7"). InnerVolumeSpecName "kube-api-access-pdj2p". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.868211 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage20-crc" (OuterVolumeSpecName: "glance-cache") pod "63d398be-aa92-4a00-933b-549a0c4e4ad7" (UID: "63d398be-aa92-4a00-933b-549a0c4e4ad7"). InnerVolumeSpecName "local-storage20-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.869200 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage19-crc" (OuterVolumeSpecName: "glance") pod "63d398be-aa92-4a00-933b-549a0c4e4ad7" (UID: "63d398be-aa92-4a00-933b-549a0c4e4ad7"). InnerVolumeSpecName "local-storage19-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.904231 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-config-data" (OuterVolumeSpecName: "config-data") pod "f3d6d7db-fc12-479e-aedf-8ef829bf01e5" (UID: "f3d6d7db-fc12-479e-aedf-8ef829bf01e5"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.911338 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/63d398be-aa92-4a00-933b-549a0c4e4ad7-config-data" (OuterVolumeSpecName: "config-data") pod "63d398be-aa92-4a00-933b-549a0c4e4ad7" (UID: "63d398be-aa92-4a00-933b-549a0c4e4ad7"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.963104 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pdj2p\" (UniqueName: \"kubernetes.io/projected/63d398be-aa92-4a00-933b-549a0c4e4ad7-kube-api-access-pdj2p\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.963147 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.963160 4751 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-sys\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.963195 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" " Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.963204 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/63d398be-aa92-4a00-933b-549a0c4e4ad7-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.963213 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.963221 4751 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.963232 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/63d398be-aa92-4a00-933b-549a0c4e4ad7-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.963240 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7j7jg\" (UniqueName: \"kubernetes.io/projected/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-kube-api-access-7j7jg\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.963249 4751 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.963257 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.963271 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage19-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage19-crc\") on node \"crc\" " Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.963279 4751 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/f3d6d7db-fc12-479e-aedf-8ef829bf01e5-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.963287 4751 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/63d398be-aa92-4a00-933b-549a0c4e4ad7-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.963300 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") on node \"crc\" " Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.963309 
4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/63d398be-aa92-4a00-933b-549a0c4e4ad7-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.963322 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" " Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.976274 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage09-crc" (UniqueName: "kubernetes.io/local-volume/local-storage09-crc") on node "crc" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.977228 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage19-crc" (UniqueName: "kubernetes.io/local-volume/local-storage19-crc") on node "crc" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.977326 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage20-crc" (UniqueName: "kubernetes.io/local-volume/local-storage20-crc") on node "crc" Jan 31 15:03:28 crc kubenswrapper[4751]: I0131 15:03:28.977497 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage07-crc" (UniqueName: "kubernetes.io/local-volume/local-storage07-crc") on node "crc" Jan 31 15:03:29 crc kubenswrapper[4751]: I0131 15:03:29.064741 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage09-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage09-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:29 crc kubenswrapper[4751]: I0131 15:03:29.064770 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage07-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage07-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:29 crc kubenswrapper[4751]: I0131 15:03:29.064779 4751 reconciler_common.go:293] "Volume detached for volume 
\"local-storage19-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage19-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:29 crc kubenswrapper[4751]: I0131 15:03:29.064788 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:29 crc kubenswrapper[4751]: I0131 15:03:29.227543 4751 generic.go:334] "Generic (PLEG): container finished" podID="63d398be-aa92-4a00-933b-549a0c4e4ad7" containerID="9c261553a8935996c28e0989101c6aa4dfa134fe2060a0f6d58b3c2531dcdda7" exitCode=0 Jan 31 15:03:29 crc kubenswrapper[4751]: I0131 15:03:29.227604 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:03:29 crc kubenswrapper[4751]: I0131 15:03:29.227643 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"63d398be-aa92-4a00-933b-549a0c4e4ad7","Type":"ContainerDied","Data":"9c261553a8935996c28e0989101c6aa4dfa134fe2060a0f6d58b3c2531dcdda7"} Jan 31 15:03:29 crc kubenswrapper[4751]: I0131 15:03:29.227725 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"63d398be-aa92-4a00-933b-549a0c4e4ad7","Type":"ContainerDied","Data":"af5a8366873c62793cd928a263608e14d01ee8087ae27093c452690e6adc2f31"} Jan 31 15:03:29 crc kubenswrapper[4751]: I0131 15:03:29.227774 4751 scope.go:117] "RemoveContainer" containerID="9c261553a8935996c28e0989101c6aa4dfa134fe2060a0f6d58b3c2531dcdda7" Jan 31 15:03:29 crc kubenswrapper[4751]: I0131 15:03:29.233127 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-2" Jan 31 15:03:29 crc kubenswrapper[4751]: I0131 15:03:29.233161 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"f3d6d7db-fc12-479e-aedf-8ef829bf01e5","Type":"ContainerDied","Data":"1dc237ef9e49d030bd026bc0cf53f0f8764a60860c14efa8846ee1aaa6f4fde8"} Jan 31 15:03:29 crc kubenswrapper[4751]: I0131 15:03:29.233060 4751 generic.go:334] "Generic (PLEG): container finished" podID="f3d6d7db-fc12-479e-aedf-8ef829bf01e5" containerID="1dc237ef9e49d030bd026bc0cf53f0f8764a60860c14efa8846ee1aaa6f4fde8" exitCode=0 Jan 31 15:03:29 crc kubenswrapper[4751]: I0131 15:03:29.233347 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-2" event={"ID":"f3d6d7db-fc12-479e-aedf-8ef829bf01e5","Type":"ContainerDied","Data":"cc8b1dd3b31a488ef9b0862ddc7dae65875f8b41c155c15581f698c10d6ef4dd"} Jan 31 15:03:29 crc kubenswrapper[4751]: I0131 15:03:29.251696 4751 scope.go:117] "RemoveContainer" containerID="136f5cdff8be1979c81bded97141738e6801ed16ea2d04c80f29ec3512279160" Jan 31 15:03:29 crc kubenswrapper[4751]: I0131 15:03:29.271883 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 15:03:29 crc kubenswrapper[4751]: I0131 15:03:29.275408 4751 scope.go:117] "RemoveContainer" containerID="9c261553a8935996c28e0989101c6aa4dfa134fe2060a0f6d58b3c2531dcdda7" Jan 31 15:03:29 crc kubenswrapper[4751]: E0131 15:03:29.275932 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9c261553a8935996c28e0989101c6aa4dfa134fe2060a0f6d58b3c2531dcdda7\": container with ID starting with 9c261553a8935996c28e0989101c6aa4dfa134fe2060a0f6d58b3c2531dcdda7 not found: ID does not exist" containerID="9c261553a8935996c28e0989101c6aa4dfa134fe2060a0f6d58b3c2531dcdda7" Jan 31 15:03:29 crc 
kubenswrapper[4751]: I0131 15:03:29.275982 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9c261553a8935996c28e0989101c6aa4dfa134fe2060a0f6d58b3c2531dcdda7"} err="failed to get container status \"9c261553a8935996c28e0989101c6aa4dfa134fe2060a0f6d58b3c2531dcdda7\": rpc error: code = NotFound desc = could not find container \"9c261553a8935996c28e0989101c6aa4dfa134fe2060a0f6d58b3c2531dcdda7\": container with ID starting with 9c261553a8935996c28e0989101c6aa4dfa134fe2060a0f6d58b3c2531dcdda7 not found: ID does not exist" Jan 31 15:03:29 crc kubenswrapper[4751]: I0131 15:03:29.276019 4751 scope.go:117] "RemoveContainer" containerID="136f5cdff8be1979c81bded97141738e6801ed16ea2d04c80f29ec3512279160" Jan 31 15:03:29 crc kubenswrapper[4751]: E0131 15:03:29.276357 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"136f5cdff8be1979c81bded97141738e6801ed16ea2d04c80f29ec3512279160\": container with ID starting with 136f5cdff8be1979c81bded97141738e6801ed16ea2d04c80f29ec3512279160 not found: ID does not exist" containerID="136f5cdff8be1979c81bded97141738e6801ed16ea2d04c80f29ec3512279160" Jan 31 15:03:29 crc kubenswrapper[4751]: I0131 15:03:29.276397 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"136f5cdff8be1979c81bded97141738e6801ed16ea2d04c80f29ec3512279160"} err="failed to get container status \"136f5cdff8be1979c81bded97141738e6801ed16ea2d04c80f29ec3512279160\": rpc error: code = NotFound desc = could not find container \"136f5cdff8be1979c81bded97141738e6801ed16ea2d04c80f29ec3512279160\": container with ID starting with 136f5cdff8be1979c81bded97141738e6801ed16ea2d04c80f29ec3512279160 not found: ID does not exist" Jan 31 15:03:29 crc kubenswrapper[4751]: I0131 15:03:29.276420 4751 scope.go:117] "RemoveContainer" containerID="1dc237ef9e49d030bd026bc0cf53f0f8764a60860c14efa8846ee1aaa6f4fde8" Jan 31 
15:03:29 crc kubenswrapper[4751]: I0131 15:03:29.282623 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 15:03:29 crc kubenswrapper[4751]: I0131 15:03:29.291928 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Jan 31 15:03:29 crc kubenswrapper[4751]: I0131 15:03:29.295453 4751 scope.go:117] "RemoveContainer" containerID="9a94ed6a4818186063a21a203b752d947f2a19e75f8bff027ed517efba40515f" Jan 31 15:03:29 crc kubenswrapper[4751]: I0131 15:03:29.298103 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-2"] Jan 31 15:03:29 crc kubenswrapper[4751]: I0131 15:03:29.312848 4751 scope.go:117] "RemoveContainer" containerID="1dc237ef9e49d030bd026bc0cf53f0f8764a60860c14efa8846ee1aaa6f4fde8" Jan 31 15:03:29 crc kubenswrapper[4751]: E0131 15:03:29.313277 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dc237ef9e49d030bd026bc0cf53f0f8764a60860c14efa8846ee1aaa6f4fde8\": container with ID starting with 1dc237ef9e49d030bd026bc0cf53f0f8764a60860c14efa8846ee1aaa6f4fde8 not found: ID does not exist" containerID="1dc237ef9e49d030bd026bc0cf53f0f8764a60860c14efa8846ee1aaa6f4fde8" Jan 31 15:03:29 crc kubenswrapper[4751]: I0131 15:03:29.313316 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dc237ef9e49d030bd026bc0cf53f0f8764a60860c14efa8846ee1aaa6f4fde8"} err="failed to get container status \"1dc237ef9e49d030bd026bc0cf53f0f8764a60860c14efa8846ee1aaa6f4fde8\": rpc error: code = NotFound desc = could not find container \"1dc237ef9e49d030bd026bc0cf53f0f8764a60860c14efa8846ee1aaa6f4fde8\": container with ID starting with 1dc237ef9e49d030bd026bc0cf53f0f8764a60860c14efa8846ee1aaa6f4fde8 not found: ID does not exist" Jan 31 15:03:29 crc kubenswrapper[4751]: I0131 15:03:29.313345 4751 
scope.go:117] "RemoveContainer" containerID="9a94ed6a4818186063a21a203b752d947f2a19e75f8bff027ed517efba40515f" Jan 31 15:03:29 crc kubenswrapper[4751]: E0131 15:03:29.313721 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9a94ed6a4818186063a21a203b752d947f2a19e75f8bff027ed517efba40515f\": container with ID starting with 9a94ed6a4818186063a21a203b752d947f2a19e75f8bff027ed517efba40515f not found: ID does not exist" containerID="9a94ed6a4818186063a21a203b752d947f2a19e75f8bff027ed517efba40515f" Jan 31 15:03:29 crc kubenswrapper[4751]: I0131 15:03:29.313751 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9a94ed6a4818186063a21a203b752d947f2a19e75f8bff027ed517efba40515f"} err="failed to get container status \"9a94ed6a4818186063a21a203b752d947f2a19e75f8bff027ed517efba40515f\": rpc error: code = NotFound desc = could not find container \"9a94ed6a4818186063a21a203b752d947f2a19e75f8bff027ed517efba40515f\": container with ID starting with 9a94ed6a4818186063a21a203b752d947f2a19e75f8bff027ed517efba40515f not found: ID does not exist" Jan 31 15:03:30 crc kubenswrapper[4751]: I0131 15:03:30.114110 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 15:03:30 crc kubenswrapper[4751]: I0131 15:03:30.114452 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="a5d5c53d-eea5-4866-983f-8477eb16177b" containerName="glance-log" containerID="cri-o://1130e67347fe51af0e3a73480eea09807ed13c49b680ae612acbf9d7812bf42d" gracePeriod=30 Jan 31 15:03:30 crc kubenswrapper[4751]: I0131 15:03:30.114625 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="a5d5c53d-eea5-4866-983f-8477eb16177b" containerName="glance-httpd" 
containerID="cri-o://46e6d1ac28c6a7caa669a8786fb2699fde254ad44262a93b0f5099c64c3baee9" gracePeriod=30 Jan 31 15:03:30 crc kubenswrapper[4751]: I0131 15:03:30.252942 4751 generic.go:334] "Generic (PLEG): container finished" podID="a5d5c53d-eea5-4866-983f-8477eb16177b" containerID="1130e67347fe51af0e3a73480eea09807ed13c49b680ae612acbf9d7812bf42d" exitCode=143 Jan 31 15:03:30 crc kubenswrapper[4751]: I0131 15:03:30.253117 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"a5d5c53d-eea5-4866-983f-8477eb16177b","Type":"ContainerDied","Data":"1130e67347fe51af0e3a73480eea09807ed13c49b680ae612acbf9d7812bf42d"} Jan 31 15:03:30 crc kubenswrapper[4751]: I0131 15:03:30.416724 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="63d398be-aa92-4a00-933b-549a0c4e4ad7" path="/var/lib/kubelet/pods/63d398be-aa92-4a00-933b-549a0c4e4ad7/volumes" Jan 31 15:03:30 crc kubenswrapper[4751]: I0131 15:03:30.417567 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f3d6d7db-fc12-479e-aedf-8ef829bf01e5" path="/var/lib/kubelet/pods/f3d6d7db-fc12-479e-aedf-8ef829bf01e5/volumes" Jan 31 15:03:30 crc kubenswrapper[4751]: I0131 15:03:30.854707 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 15:03:30 crc kubenswrapper[4751]: I0131 15:03:30.855163 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="1c3cde72-72a2-4a51-a061-06397061de3c" containerName="glance-log" containerID="cri-o://5f91babe44ec234dd474a96b96ebc6224d6c3a4f5e9aa9658f789a37025d6480" gracePeriod=30 Jan 31 15:03:30 crc kubenswrapper[4751]: I0131 15:03:30.855606 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="1c3cde72-72a2-4a51-a061-06397061de3c" containerName="glance-httpd" 
containerID="cri-o://6735f01e0fcf3a4be1978348937bbe37355d165dbd9469a97f61438e89b09635" gracePeriod=30 Jan 31 15:03:31 crc kubenswrapper[4751]: I0131 15:03:31.265119 4751 generic.go:334] "Generic (PLEG): container finished" podID="1c3cde72-72a2-4a51-a061-06397061de3c" containerID="5f91babe44ec234dd474a96b96ebc6224d6c3a4f5e9aa9658f789a37025d6480" exitCode=143 Jan 31 15:03:31 crc kubenswrapper[4751]: I0131 15:03:31.265168 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"1c3cde72-72a2-4a51-a061-06397061de3c","Type":"ContainerDied","Data":"5f91babe44ec234dd474a96b96ebc6224d6c3a4f5e9aa9658f789a37025d6480"} Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.652038 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.732092 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5d5c53d-eea5-4866-983f-8477eb16177b-config-data\") pod \"a5d5c53d-eea5-4866-983f-8477eb16177b\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.732151 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-lib-modules\") pod \"a5d5c53d-eea5-4866-983f-8477eb16177b\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.732218 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-llcmb\" (UniqueName: \"kubernetes.io/projected/a5d5c53d-eea5-4866-983f-8477eb16177b-kube-api-access-llcmb\") pod \"a5d5c53d-eea5-4866-983f-8477eb16177b\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 
15:03:33.732237 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-etc-iscsi\") pod \"a5d5c53d-eea5-4866-983f-8477eb16177b\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.732264 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5d5c53d-eea5-4866-983f-8477eb16177b-scripts\") pod \"a5d5c53d-eea5-4866-983f-8477eb16177b\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.732317 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"a5d5c53d-eea5-4866-983f-8477eb16177b\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.732303 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "a5d5c53d-eea5-4866-983f-8477eb16177b" (UID: "a5d5c53d-eea5-4866-983f-8477eb16177b"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.732339 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a5d5c53d-eea5-4866-983f-8477eb16177b-httpd-run\") pod \"a5d5c53d-eea5-4866-983f-8477eb16177b\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.732351 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "a5d5c53d-eea5-4866-983f-8477eb16177b" (UID: "a5d5c53d-eea5-4866-983f-8477eb16177b"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.732385 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-dev" (OuterVolumeSpecName: "dev") pod "a5d5c53d-eea5-4866-983f-8477eb16177b" (UID: "a5d5c53d-eea5-4866-983f-8477eb16177b"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.732360 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-dev\") pod \"a5d5c53d-eea5-4866-983f-8477eb16177b\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.732430 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"a5d5c53d-eea5-4866-983f-8477eb16177b\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.732480 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-var-locks-brick\") pod \"a5d5c53d-eea5-4866-983f-8477eb16177b\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.732503 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-sys\") pod \"a5d5c53d-eea5-4866-983f-8477eb16177b\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.732564 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-run\") pod \"a5d5c53d-eea5-4866-983f-8477eb16177b\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.732598 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5d5c53d-eea5-4866-983f-8477eb16177b-logs\") pod 
\"a5d5c53d-eea5-4866-983f-8477eb16177b\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.732656 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-etc-nvme\") pod \"a5d5c53d-eea5-4866-983f-8477eb16177b\" (UID: \"a5d5c53d-eea5-4866-983f-8477eb16177b\") " Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.732970 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5d5c53d-eea5-4866-983f-8477eb16177b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "a5d5c53d-eea5-4866-983f-8477eb16177b" (UID: "a5d5c53d-eea5-4866-983f-8477eb16177b"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.733046 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "a5d5c53d-eea5-4866-983f-8477eb16177b" (UID: "a5d5c53d-eea5-4866-983f-8477eb16177b"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.733185 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/a5d5c53d-eea5-4866-983f-8477eb16177b-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.733207 4751 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-dev\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.733216 4751 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.733224 4751 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.733233 4751 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.733257 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-sys" (OuterVolumeSpecName: "sys") pod "a5d5c53d-eea5-4866-983f-8477eb16177b" (UID: "a5d5c53d-eea5-4866-983f-8477eb16177b"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.733277 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "a5d5c53d-eea5-4866-983f-8477eb16177b" (UID: "a5d5c53d-eea5-4866-983f-8477eb16177b"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.733297 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-run" (OuterVolumeSpecName: "run") pod "a5d5c53d-eea5-4866-983f-8477eb16177b" (UID: "a5d5c53d-eea5-4866-983f-8477eb16177b"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.733491 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a5d5c53d-eea5-4866-983f-8477eb16177b-logs" (OuterVolumeSpecName: "logs") pod "a5d5c53d-eea5-4866-983f-8477eb16177b" (UID: "a5d5c53d-eea5-4866-983f-8477eb16177b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.737174 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "a5d5c53d-eea5-4866-983f-8477eb16177b" (UID: "a5d5c53d-eea5-4866-983f-8477eb16177b"). InnerVolumeSpecName "local-storage05-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.737209 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5d5c53d-eea5-4866-983f-8477eb16177b-scripts" (OuterVolumeSpecName: "scripts") pod "a5d5c53d-eea5-4866-983f-8477eb16177b" (UID: "a5d5c53d-eea5-4866-983f-8477eb16177b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.737667 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a5d5c53d-eea5-4866-983f-8477eb16177b-kube-api-access-llcmb" (OuterVolumeSpecName: "kube-api-access-llcmb") pod "a5d5c53d-eea5-4866-983f-8477eb16177b" (UID: "a5d5c53d-eea5-4866-983f-8477eb16177b"). InnerVolumeSpecName "kube-api-access-llcmb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.737706 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance-cache") pod "a5d5c53d-eea5-4866-983f-8477eb16177b" (UID: "a5d5c53d-eea5-4866-983f-8477eb16177b"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.778476 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a5d5c53d-eea5-4866-983f-8477eb16177b-config-data" (OuterVolumeSpecName: "config-data") pod "a5d5c53d-eea5-4866-983f-8477eb16177b" (UID: "a5d5c53d-eea5-4866-983f-8477eb16177b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.834768 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.834802 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.834818 4751 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.834834 4751 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-sys\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.834846 4751 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/a5d5c53d-eea5-4866-983f-8477eb16177b-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.834857 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/a5d5c53d-eea5-4866-983f-8477eb16177b-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.834864 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/a5d5c53d-eea5-4866-983f-8477eb16177b-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.834872 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-llcmb\" (UniqueName: 
\"kubernetes.io/projected/a5d5c53d-eea5-4866-983f-8477eb16177b-kube-api-access-llcmb\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.834880 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/a5d5c53d-eea5-4866-983f-8477eb16177b-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.850359 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.852319 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.935535 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:33 crc kubenswrapper[4751]: I0131 15:03:33.935568 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.277810 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.296682 4751 generic.go:334] "Generic (PLEG): container finished" podID="1c3cde72-72a2-4a51-a061-06397061de3c" containerID="6735f01e0fcf3a4be1978348937bbe37355d165dbd9469a97f61438e89b09635" exitCode=0 Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.296724 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"1c3cde72-72a2-4a51-a061-06397061de3c","Type":"ContainerDied","Data":"6735f01e0fcf3a4be1978348937bbe37355d165dbd9469a97f61438e89b09635"} Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.296771 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"1c3cde72-72a2-4a51-a061-06397061de3c","Type":"ContainerDied","Data":"c898b2da5714126a65ee741ceb3ed33e63dd1016489a4c5e916d0a834f254ea4"} Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.296741 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.296793 4751 scope.go:117] "RemoveContainer" containerID="6735f01e0fcf3a4be1978348937bbe37355d165dbd9469a97f61438e89b09635" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.310298 4751 generic.go:334] "Generic (PLEG): container finished" podID="a5d5c53d-eea5-4866-983f-8477eb16177b" containerID="46e6d1ac28c6a7caa669a8786fb2699fde254ad44262a93b0f5099c64c3baee9" exitCode=0 Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.310337 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"a5d5c53d-eea5-4866-983f-8477eb16177b","Type":"ContainerDied","Data":"46e6d1ac28c6a7caa669a8786fb2699fde254ad44262a93b0f5099c64c3baee9"} Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.310360 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"a5d5c53d-eea5-4866-983f-8477eb16177b","Type":"ContainerDied","Data":"f196d65739ae0a450d1f988eb2e240599d07c3bc3006a4085be84398709e7ee3"} Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.310389 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.334287 4751 scope.go:117] "RemoveContainer" containerID="5f91babe44ec234dd474a96b96ebc6224d6c3a4f5e9aa9658f789a37025d6480" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.342886 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"1c3cde72-72a2-4a51-a061-06397061de3c\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.343447 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-lib-modules\") pod \"1c3cde72-72a2-4a51-a061-06397061de3c\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.343550 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-var-locks-brick\") pod \"1c3cde72-72a2-4a51-a061-06397061de3c\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.343620 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-etc-nvme\") pod \"1c3cde72-72a2-4a51-a061-06397061de3c\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.343689 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c3cde72-72a2-4a51-a061-06397061de3c-logs\") pod \"1c3cde72-72a2-4a51-a061-06397061de3c\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " Jan 31 15:03:34 crc 
kubenswrapper[4751]: I0131 15:03:34.343807 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c3cde72-72a2-4a51-a061-06397061de3c-httpd-run\") pod \"1c3cde72-72a2-4a51-a061-06397061de3c\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.343884 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kxgsq\" (UniqueName: \"kubernetes.io/projected/1c3cde72-72a2-4a51-a061-06397061de3c-kube-api-access-kxgsq\") pod \"1c3cde72-72a2-4a51-a061-06397061de3c\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.343949 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-run\") pod \"1c3cde72-72a2-4a51-a061-06397061de3c\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.344177 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-sys\") pod \"1c3cde72-72a2-4a51-a061-06397061de3c\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.344262 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c3cde72-72a2-4a51-a061-06397061de3c-config-data\") pod \"1c3cde72-72a2-4a51-a061-06397061de3c\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.344339 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-dev\") pod \"1c3cde72-72a2-4a51-a061-06397061de3c\" (UID: 
\"1c3cde72-72a2-4a51-a061-06397061de3c\") " Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.344409 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-etc-iscsi\") pod \"1c3cde72-72a2-4a51-a061-06397061de3c\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.344485 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c3cde72-72a2-4a51-a061-06397061de3c-scripts\") pod \"1c3cde72-72a2-4a51-a061-06397061de3c\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.344559 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"1c3cde72-72a2-4a51-a061-06397061de3c\" (UID: \"1c3cde72-72a2-4a51-a061-06397061de3c\") " Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.346436 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-run" (OuterVolumeSpecName: "run") pod "1c3cde72-72a2-4a51-a061-06397061de3c" (UID: "1c3cde72-72a2-4a51-a061-06397061de3c"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.346520 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-sys" (OuterVolumeSpecName: "sys") pod "1c3cde72-72a2-4a51-a061-06397061de3c" (UID: "1c3cde72-72a2-4a51-a061-06397061de3c"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.346532 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "1c3cde72-72a2-4a51-a061-06397061de3c" (UID: "1c3cde72-72a2-4a51-a061-06397061de3c"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.346751 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "1c3cde72-72a2-4a51-a061-06397061de3c" (UID: "1c3cde72-72a2-4a51-a061-06397061de3c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.346909 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "1c3cde72-72a2-4a51-a061-06397061de3c" (UID: "1c3cde72-72a2-4a51-a061-06397061de3c"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.347042 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "1c3cde72-72a2-4a51-a061-06397061de3c" (UID: "1c3cde72-72a2-4a51-a061-06397061de3c"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.347140 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c3cde72-72a2-4a51-a061-06397061de3c-logs" (OuterVolumeSpecName: "logs") pod "1c3cde72-72a2-4a51-a061-06397061de3c" (UID: "1c3cde72-72a2-4a51-a061-06397061de3c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.347154 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1c3cde72-72a2-4a51-a061-06397061de3c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "1c3cde72-72a2-4a51-a061-06397061de3c" (UID: "1c3cde72-72a2-4a51-a061-06397061de3c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.348276 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-dev" (OuterVolumeSpecName: "dev") pod "1c3cde72-72a2-4a51-a061-06397061de3c" (UID: "1c3cde72-72a2-4a51-a061-06397061de3c"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.349400 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.349786 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage13-crc" (OuterVolumeSpecName: "glance-cache") pod "1c3cde72-72a2-4a51-a061-06397061de3c" (UID: "1c3cde72-72a2-4a51-a061-06397061de3c"). InnerVolumeSpecName "local-storage13-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.357244 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage17-crc" (OuterVolumeSpecName: "glance") pod "1c3cde72-72a2-4a51-a061-06397061de3c" (UID: "1c3cde72-72a2-4a51-a061-06397061de3c"). InnerVolumeSpecName "local-storage17-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.357368 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c3cde72-72a2-4a51-a061-06397061de3c-scripts" (OuterVolumeSpecName: "scripts") pod "1c3cde72-72a2-4a51-a061-06397061de3c" (UID: "1c3cde72-72a2-4a51-a061-06397061de3c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.363100 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.363865 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c3cde72-72a2-4a51-a061-06397061de3c-kube-api-access-kxgsq" (OuterVolumeSpecName: "kube-api-access-kxgsq") pod "1c3cde72-72a2-4a51-a061-06397061de3c" (UID: "1c3cde72-72a2-4a51-a061-06397061de3c"). InnerVolumeSpecName "kube-api-access-kxgsq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.390968 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1c3cde72-72a2-4a51-a061-06397061de3c-config-data" (OuterVolumeSpecName: "config-data") pod "1c3cde72-72a2-4a51-a061-06397061de3c" (UID: "1c3cde72-72a2-4a51-a061-06397061de3c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.416753 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a5d5c53d-eea5-4866-983f-8477eb16177b" path="/var/lib/kubelet/pods/a5d5c53d-eea5-4866-983f-8477eb16177b/volumes" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.445808 4751 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.445944 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/1c3cde72-72a2-4a51-a061-06397061de3c-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.446014 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") on node \"crc\" " Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.446124 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" " Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.446370 4751 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.446663 4751 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.446684 4751 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.446693 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/1c3cde72-72a2-4a51-a061-06397061de3c-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.446703 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/1c3cde72-72a2-4a51-a061-06397061de3c-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.446712 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kxgsq\" (UniqueName: \"kubernetes.io/projected/1c3cde72-72a2-4a51-a061-06397061de3c-kube-api-access-kxgsq\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.446723 4751 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.446731 4751 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-sys\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.446740 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/1c3cde72-72a2-4a51-a061-06397061de3c-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.446748 4751 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/1c3cde72-72a2-4a51-a061-06397061de3c-dev\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.448678 4751 scope.go:117] "RemoveContainer" 
containerID="6735f01e0fcf3a4be1978348937bbe37355d165dbd9469a97f61438e89b09635" Jan 31 15:03:34 crc kubenswrapper[4751]: E0131 15:03:34.449034 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6735f01e0fcf3a4be1978348937bbe37355d165dbd9469a97f61438e89b09635\": container with ID starting with 6735f01e0fcf3a4be1978348937bbe37355d165dbd9469a97f61438e89b09635 not found: ID does not exist" containerID="6735f01e0fcf3a4be1978348937bbe37355d165dbd9469a97f61438e89b09635" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.449077 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6735f01e0fcf3a4be1978348937bbe37355d165dbd9469a97f61438e89b09635"} err="failed to get container status \"6735f01e0fcf3a4be1978348937bbe37355d165dbd9469a97f61438e89b09635\": rpc error: code = NotFound desc = could not find container \"6735f01e0fcf3a4be1978348937bbe37355d165dbd9469a97f61438e89b09635\": container with ID starting with 6735f01e0fcf3a4be1978348937bbe37355d165dbd9469a97f61438e89b09635 not found: ID does not exist" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.449099 4751 scope.go:117] "RemoveContainer" containerID="5f91babe44ec234dd474a96b96ebc6224d6c3a4f5e9aa9658f789a37025d6480" Jan 31 15:03:34 crc kubenswrapper[4751]: E0131 15:03:34.449962 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5f91babe44ec234dd474a96b96ebc6224d6c3a4f5e9aa9658f789a37025d6480\": container with ID starting with 5f91babe44ec234dd474a96b96ebc6224d6c3a4f5e9aa9658f789a37025d6480 not found: ID does not exist" containerID="5f91babe44ec234dd474a96b96ebc6224d6c3a4f5e9aa9658f789a37025d6480" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.449986 4751 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"5f91babe44ec234dd474a96b96ebc6224d6c3a4f5e9aa9658f789a37025d6480"} err="failed to get container status \"5f91babe44ec234dd474a96b96ebc6224d6c3a4f5e9aa9658f789a37025d6480\": rpc error: code = NotFound desc = could not find container \"5f91babe44ec234dd474a96b96ebc6224d6c3a4f5e9aa9658f789a37025d6480\": container with ID starting with 5f91babe44ec234dd474a96b96ebc6224d6c3a4f5e9aa9658f789a37025d6480 not found: ID does not exist" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.450000 4751 scope.go:117] "RemoveContainer" containerID="46e6d1ac28c6a7caa669a8786fb2699fde254ad44262a93b0f5099c64c3baee9" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.460268 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage13-crc" (UniqueName: "kubernetes.io/local-volume/local-storage13-crc") on node "crc" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.461730 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage17-crc" (UniqueName: "kubernetes.io/local-volume/local-storage17-crc") on node "crc" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.470807 4751 scope.go:117] "RemoveContainer" containerID="1130e67347fe51af0e3a73480eea09807ed13c49b680ae612acbf9d7812bf42d" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.487087 4751 scope.go:117] "RemoveContainer" containerID="46e6d1ac28c6a7caa669a8786fb2699fde254ad44262a93b0f5099c64c3baee9" Jan 31 15:03:34 crc kubenswrapper[4751]: E0131 15:03:34.487467 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"46e6d1ac28c6a7caa669a8786fb2699fde254ad44262a93b0f5099c64c3baee9\": container with ID starting with 46e6d1ac28c6a7caa669a8786fb2699fde254ad44262a93b0f5099c64c3baee9 not found: ID does not exist" containerID="46e6d1ac28c6a7caa669a8786fb2699fde254ad44262a93b0f5099c64c3baee9" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.487572 4751 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"46e6d1ac28c6a7caa669a8786fb2699fde254ad44262a93b0f5099c64c3baee9"} err="failed to get container status \"46e6d1ac28c6a7caa669a8786fb2699fde254ad44262a93b0f5099c64c3baee9\": rpc error: code = NotFound desc = could not find container \"46e6d1ac28c6a7caa669a8786fb2699fde254ad44262a93b0f5099c64c3baee9\": container with ID starting with 46e6d1ac28c6a7caa669a8786fb2699fde254ad44262a93b0f5099c64c3baee9 not found: ID does not exist" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.487648 4751 scope.go:117] "RemoveContainer" containerID="1130e67347fe51af0e3a73480eea09807ed13c49b680ae612acbf9d7812bf42d" Jan 31 15:03:34 crc kubenswrapper[4751]: E0131 15:03:34.488020 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1130e67347fe51af0e3a73480eea09807ed13c49b680ae612acbf9d7812bf42d\": container with ID starting with 1130e67347fe51af0e3a73480eea09807ed13c49b680ae612acbf9d7812bf42d not found: ID does not exist" containerID="1130e67347fe51af0e3a73480eea09807ed13c49b680ae612acbf9d7812bf42d" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.488088 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1130e67347fe51af0e3a73480eea09807ed13c49b680ae612acbf9d7812bf42d"} err="failed to get container status \"1130e67347fe51af0e3a73480eea09807ed13c49b680ae612acbf9d7812bf42d\": rpc error: code = NotFound desc = could not find container \"1130e67347fe51af0e3a73480eea09807ed13c49b680ae612acbf9d7812bf42d\": container with ID starting with 1130e67347fe51af0e3a73480eea09807ed13c49b680ae612acbf9d7812bf42d not found: ID does not exist" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.547594 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") on node \"crc\" DevicePath 
\"\"" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.547630 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.621282 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 15:03:34 crc kubenswrapper[4751]: I0131 15:03:34.626761 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 15:03:35 crc kubenswrapper[4751]: I0131 15:03:35.724091 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-sync-pn752"] Jan 31 15:03:35 crc kubenswrapper[4751]: I0131 15:03:35.731596 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-sync-pn752"] Jan 31 15:03:35 crc kubenswrapper[4751]: I0131 15:03:35.791762 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance5797-account-delete-bfbxf"] Jan 31 15:03:35 crc kubenswrapper[4751]: E0131 15:03:35.792323 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3d6d7db-fc12-479e-aedf-8ef829bf01e5" containerName="glance-log" Jan 31 15:03:35 crc kubenswrapper[4751]: I0131 15:03:35.792420 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3d6d7db-fc12-479e-aedf-8ef829bf01e5" containerName="glance-log" Jan 31 15:03:35 crc kubenswrapper[4751]: E0131 15:03:35.792493 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5d5c53d-eea5-4866-983f-8477eb16177b" containerName="glance-httpd" Jan 31 15:03:35 crc kubenswrapper[4751]: I0131 15:03:35.792566 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5d5c53d-eea5-4866-983f-8477eb16177b" containerName="glance-httpd" Jan 31 15:03:35 crc kubenswrapper[4751]: E0131 15:03:35.792752 4751 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="40930074-48c4-404d-a55c-bb8a4f581f56" containerName="glance-log" Jan 31 15:03:35 crc kubenswrapper[4751]: I0131 15:03:35.792827 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="40930074-48c4-404d-a55c-bb8a4f581f56" containerName="glance-log" Jan 31 15:03:35 crc kubenswrapper[4751]: E0131 15:03:35.792915 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2142d4ca-115a-49b7-8f50-ac020fdbc342" containerName="glance-httpd" Jan 31 15:03:35 crc kubenswrapper[4751]: I0131 15:03:35.792985 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="2142d4ca-115a-49b7-8f50-ac020fdbc342" containerName="glance-httpd" Jan 31 15:03:35 crc kubenswrapper[4751]: E0131 15:03:35.793060 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a5d5c53d-eea5-4866-983f-8477eb16177b" containerName="glance-log" Jan 31 15:03:35 crc kubenswrapper[4751]: I0131 15:03:35.793148 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="a5d5c53d-eea5-4866-983f-8477eb16177b" containerName="glance-log" Jan 31 15:03:35 crc kubenswrapper[4751]: E0131 15:03:35.793245 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2142d4ca-115a-49b7-8f50-ac020fdbc342" containerName="glance-log" Jan 31 15:03:35 crc kubenswrapper[4751]: I0131 15:03:35.793321 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="2142d4ca-115a-49b7-8f50-ac020fdbc342" containerName="glance-log" Jan 31 15:03:35 crc kubenswrapper[4751]: E0131 15:03:35.793405 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63d398be-aa92-4a00-933b-549a0c4e4ad7" containerName="glance-log" Jan 31 15:03:35 crc kubenswrapper[4751]: I0131 15:03:35.793470 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="63d398be-aa92-4a00-933b-549a0c4e4ad7" containerName="glance-log" Jan 31 15:03:35 crc kubenswrapper[4751]: E0131 15:03:35.793581 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c3cde72-72a2-4a51-a061-06397061de3c" 
containerName="glance-httpd" Jan 31 15:03:35 crc kubenswrapper[4751]: I0131 15:03:35.793650 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c3cde72-72a2-4a51-a061-06397061de3c" containerName="glance-httpd" Jan 31 15:03:35 crc kubenswrapper[4751]: E0131 15:03:35.793748 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="63d398be-aa92-4a00-933b-549a0c4e4ad7" containerName="glance-httpd" Jan 31 15:03:35 crc kubenswrapper[4751]: I0131 15:03:35.793817 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="63d398be-aa92-4a00-933b-549a0c4e4ad7" containerName="glance-httpd" Jan 31 15:03:35 crc kubenswrapper[4751]: E0131 15:03:35.793894 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="40930074-48c4-404d-a55c-bb8a4f581f56" containerName="glance-httpd" Jan 31 15:03:35 crc kubenswrapper[4751]: I0131 15:03:35.793963 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="40930074-48c4-404d-a55c-bb8a4f581f56" containerName="glance-httpd" Jan 31 15:03:35 crc kubenswrapper[4751]: E0131 15:03:35.794044 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c3cde72-72a2-4a51-a061-06397061de3c" containerName="glance-log" Jan 31 15:03:35 crc kubenswrapper[4751]: I0131 15:03:35.794138 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c3cde72-72a2-4a51-a061-06397061de3c" containerName="glance-log" Jan 31 15:03:35 crc kubenswrapper[4751]: E0131 15:03:35.794207 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f3d6d7db-fc12-479e-aedf-8ef829bf01e5" containerName="glance-httpd" Jan 31 15:03:35 crc kubenswrapper[4751]: I0131 15:03:35.794290 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f3d6d7db-fc12-479e-aedf-8ef829bf01e5" containerName="glance-httpd" Jan 31 15:03:35 crc kubenswrapper[4751]: I0131 15:03:35.794505 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5d5c53d-eea5-4866-983f-8477eb16177b" containerName="glance-log" Jan 31 15:03:35 crc 
kubenswrapper[4751]: I0131 15:03:35.794580 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="2142d4ca-115a-49b7-8f50-ac020fdbc342" containerName="glance-httpd" Jan 31 15:03:35 crc kubenswrapper[4751]: I0131 15:03:35.794648 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="40930074-48c4-404d-a55c-bb8a4f581f56" containerName="glance-log" Jan 31 15:03:35 crc kubenswrapper[4751]: I0131 15:03:35.794715 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="40930074-48c4-404d-a55c-bb8a4f581f56" containerName="glance-httpd" Jan 31 15:03:35 crc kubenswrapper[4751]: I0131 15:03:35.794780 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3d6d7db-fc12-479e-aedf-8ef829bf01e5" containerName="glance-httpd" Jan 31 15:03:35 crc kubenswrapper[4751]: I0131 15:03:35.794857 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f3d6d7db-fc12-479e-aedf-8ef829bf01e5" containerName="glance-log" Jan 31 15:03:35 crc kubenswrapper[4751]: I0131 15:03:35.794941 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="a5d5c53d-eea5-4866-983f-8477eb16177b" containerName="glance-httpd" Jan 31 15:03:35 crc kubenswrapper[4751]: I0131 15:03:35.795014 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c3cde72-72a2-4a51-a061-06397061de3c" containerName="glance-httpd" Jan 31 15:03:35 crc kubenswrapper[4751]: I0131 15:03:35.795102 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="2142d4ca-115a-49b7-8f50-ac020fdbc342" containerName="glance-log" Jan 31 15:03:35 crc kubenswrapper[4751]: I0131 15:03:35.795176 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="63d398be-aa92-4a00-933b-549a0c4e4ad7" containerName="glance-log" Jan 31 15:03:35 crc kubenswrapper[4751]: I0131 15:03:35.795249 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c3cde72-72a2-4a51-a061-06397061de3c" containerName="glance-log" Jan 31 15:03:35 crc 
kubenswrapper[4751]: I0131 15:03:35.795336 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="63d398be-aa92-4a00-933b-549a0c4e4ad7" containerName="glance-httpd" Jan 31 15:03:35 crc kubenswrapper[4751]: I0131 15:03:35.796004 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance5797-account-delete-bfbxf" Jan 31 15:03:35 crc kubenswrapper[4751]: I0131 15:03:35.805127 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance5797-account-delete-bfbxf"] Jan 31 15:03:35 crc kubenswrapper[4751]: I0131 15:03:35.867768 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31a4e4de-2e47-46c3-8b73-06c6a7fe5282-operator-scripts\") pod \"glance5797-account-delete-bfbxf\" (UID: \"31a4e4de-2e47-46c3-8b73-06c6a7fe5282\") " pod="glance-kuttl-tests/glance5797-account-delete-bfbxf" Jan 31 15:03:35 crc kubenswrapper[4751]: I0131 15:03:35.867842 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r5d7p\" (UniqueName: \"kubernetes.io/projected/31a4e4de-2e47-46c3-8b73-06c6a7fe5282-kube-api-access-r5d7p\") pod \"glance5797-account-delete-bfbxf\" (UID: \"31a4e4de-2e47-46c3-8b73-06c6a7fe5282\") " pod="glance-kuttl-tests/glance5797-account-delete-bfbxf" Jan 31 15:03:35 crc kubenswrapper[4751]: I0131 15:03:35.969017 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31a4e4de-2e47-46c3-8b73-06c6a7fe5282-operator-scripts\") pod \"glance5797-account-delete-bfbxf\" (UID: \"31a4e4de-2e47-46c3-8b73-06c6a7fe5282\") " pod="glance-kuttl-tests/glance5797-account-delete-bfbxf" Jan 31 15:03:35 crc kubenswrapper[4751]: I0131 15:03:35.969133 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-r5d7p\" 
(UniqueName: \"kubernetes.io/projected/31a4e4de-2e47-46c3-8b73-06c6a7fe5282-kube-api-access-r5d7p\") pod \"glance5797-account-delete-bfbxf\" (UID: \"31a4e4de-2e47-46c3-8b73-06c6a7fe5282\") " pod="glance-kuttl-tests/glance5797-account-delete-bfbxf" Jan 31 15:03:35 crc kubenswrapper[4751]: I0131 15:03:35.969723 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31a4e4de-2e47-46c3-8b73-06c6a7fe5282-operator-scripts\") pod \"glance5797-account-delete-bfbxf\" (UID: \"31a4e4de-2e47-46c3-8b73-06c6a7fe5282\") " pod="glance-kuttl-tests/glance5797-account-delete-bfbxf" Jan 31 15:03:35 crc kubenswrapper[4751]: I0131 15:03:35.999459 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-r5d7p\" (UniqueName: \"kubernetes.io/projected/31a4e4de-2e47-46c3-8b73-06c6a7fe5282-kube-api-access-r5d7p\") pod \"glance5797-account-delete-bfbxf\" (UID: \"31a4e4de-2e47-46c3-8b73-06c6a7fe5282\") " pod="glance-kuttl-tests/glance5797-account-delete-bfbxf" Jan 31 15:03:36 crc kubenswrapper[4751]: I0131 15:03:36.112038 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance5797-account-delete-bfbxf" Jan 31 15:03:36 crc kubenswrapper[4751]: I0131 15:03:36.419458 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1c3cde72-72a2-4a51-a061-06397061de3c" path="/var/lib/kubelet/pods/1c3cde72-72a2-4a51-a061-06397061de3c/volumes" Jan 31 15:03:36 crc kubenswrapper[4751]: I0131 15:03:36.420758 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d350f693-ea74-48d5-a7a7-3fa3264174ca" path="/var/lib/kubelet/pods/d350f693-ea74-48d5-a7a7-3fa3264174ca/volumes" Jan 31 15:03:36 crc kubenswrapper[4751]: I0131 15:03:36.530709 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance5797-account-delete-bfbxf"] Jan 31 15:03:37 crc kubenswrapper[4751]: I0131 15:03:37.340427 4751 generic.go:334] "Generic (PLEG): container finished" podID="31a4e4de-2e47-46c3-8b73-06c6a7fe5282" containerID="04a2620fed6cde572c43eab031fe61d9c4a7478ffe007510ee4e0e1e7a876ff4" exitCode=0 Jan 31 15:03:37 crc kubenswrapper[4751]: I0131 15:03:37.340730 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance5797-account-delete-bfbxf" event={"ID":"31a4e4de-2e47-46c3-8b73-06c6a7fe5282","Type":"ContainerDied","Data":"04a2620fed6cde572c43eab031fe61d9c4a7478ffe007510ee4e0e1e7a876ff4"} Jan 31 15:03:37 crc kubenswrapper[4751]: I0131 15:03:37.340784 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance5797-account-delete-bfbxf" event={"ID":"31a4e4de-2e47-46c3-8b73-06c6a7fe5282","Type":"ContainerStarted","Data":"8db058d18edbc68f572a370f2e2fbcbf76b515e8af7d5c00e3296e137637bb91"} Jan 31 15:03:38 crc kubenswrapper[4751]: I0131 15:03:38.651420 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance5797-account-delete-bfbxf" Jan 31 15:03:38 crc kubenswrapper[4751]: I0131 15:03:38.703726 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31a4e4de-2e47-46c3-8b73-06c6a7fe5282-operator-scripts\") pod \"31a4e4de-2e47-46c3-8b73-06c6a7fe5282\" (UID: \"31a4e4de-2e47-46c3-8b73-06c6a7fe5282\") " Jan 31 15:03:38 crc kubenswrapper[4751]: I0131 15:03:38.703805 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r5d7p\" (UniqueName: \"kubernetes.io/projected/31a4e4de-2e47-46c3-8b73-06c6a7fe5282-kube-api-access-r5d7p\") pod \"31a4e4de-2e47-46c3-8b73-06c6a7fe5282\" (UID: \"31a4e4de-2e47-46c3-8b73-06c6a7fe5282\") " Jan 31 15:03:38 crc kubenswrapper[4751]: I0131 15:03:38.705247 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31a4e4de-2e47-46c3-8b73-06c6a7fe5282-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "31a4e4de-2e47-46c3-8b73-06c6a7fe5282" (UID: "31a4e4de-2e47-46c3-8b73-06c6a7fe5282"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:03:38 crc kubenswrapper[4751]: I0131 15:03:38.711633 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31a4e4de-2e47-46c3-8b73-06c6a7fe5282-kube-api-access-r5d7p" (OuterVolumeSpecName: "kube-api-access-r5d7p") pod "31a4e4de-2e47-46c3-8b73-06c6a7fe5282" (UID: "31a4e4de-2e47-46c3-8b73-06c6a7fe5282"). InnerVolumeSpecName "kube-api-access-r5d7p". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:03:38 crc kubenswrapper[4751]: I0131 15:03:38.805047 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-r5d7p\" (UniqueName: \"kubernetes.io/projected/31a4e4de-2e47-46c3-8b73-06c6a7fe5282-kube-api-access-r5d7p\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:38 crc kubenswrapper[4751]: I0131 15:03:38.805100 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/31a4e4de-2e47-46c3-8b73-06c6a7fe5282-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:39 crc kubenswrapper[4751]: I0131 15:03:39.358264 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance5797-account-delete-bfbxf" event={"ID":"31a4e4de-2e47-46c3-8b73-06c6a7fe5282","Type":"ContainerDied","Data":"8db058d18edbc68f572a370f2e2fbcbf76b515e8af7d5c00e3296e137637bb91"} Jan 31 15:03:39 crc kubenswrapper[4751]: I0131 15:03:39.358302 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8db058d18edbc68f572a370f2e2fbcbf76b515e8af7d5c00e3296e137637bb91" Jan 31 15:03:39 crc kubenswrapper[4751]: I0131 15:03:39.358306 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance5797-account-delete-bfbxf" Jan 31 15:03:40 crc kubenswrapper[4751]: I0131 15:03:40.822156 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-create-b924d"] Jan 31 15:03:40 crc kubenswrapper[4751]: I0131 15:03:40.829767 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-create-b924d"] Jan 31 15:03:40 crc kubenswrapper[4751]: I0131 15:03:40.837475 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance5797-account-delete-bfbxf"] Jan 31 15:03:40 crc kubenswrapper[4751]: I0131 15:03:40.845785 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-5797-account-create-update-hfdl2"] Jan 31 15:03:40 crc kubenswrapper[4751]: I0131 15:03:40.852728 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance5797-account-delete-bfbxf"] Jan 31 15:03:40 crc kubenswrapper[4751]: I0131 15:03:40.858038 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-5797-account-create-update-hfdl2"] Jan 31 15:03:41 crc kubenswrapper[4751]: I0131 15:03:41.480624 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-633a-account-create-update-kmj9d"] Jan 31 15:03:41 crc kubenswrapper[4751]: E0131 15:03:41.481671 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="31a4e4de-2e47-46c3-8b73-06c6a7fe5282" containerName="mariadb-account-delete" Jan 31 15:03:41 crc kubenswrapper[4751]: I0131 15:03:41.481705 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="31a4e4de-2e47-46c3-8b73-06c6a7fe5282" containerName="mariadb-account-delete" Jan 31 15:03:41 crc kubenswrapper[4751]: I0131 15:03:41.482031 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="31a4e4de-2e47-46c3-8b73-06c6a7fe5282" containerName="mariadb-account-delete" Jan 31 15:03:41 crc kubenswrapper[4751]: I0131 
15:03:41.482827 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-633a-account-create-update-kmj9d" Jan 31 15:03:41 crc kubenswrapper[4751]: I0131 15:03:41.487580 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-db-secret" Jan 31 15:03:41 crc kubenswrapper[4751]: I0131 15:03:41.489242 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-create-9z5l9"] Jan 31 15:03:41 crc kubenswrapper[4751]: I0131 15:03:41.490251 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-9z5l9" Jan 31 15:03:41 crc kubenswrapper[4751]: I0131 15:03:41.496335 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-633a-account-create-update-kmj9d"] Jan 31 15:03:41 crc kubenswrapper[4751]: I0131 15:03:41.507096 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-9z5l9"] Jan 31 15:03:41 crc kubenswrapper[4751]: I0131 15:03:41.644015 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d66e78e-6853-45e7-966f-cd9ec9586439-operator-scripts\") pod \"glance-db-create-9z5l9\" (UID: \"4d66e78e-6853-45e7-966f-cd9ec9586439\") " pod="glance-kuttl-tests/glance-db-create-9z5l9" Jan 31 15:03:41 crc kubenswrapper[4751]: I0131 15:03:41.644058 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hcl4h\" (UniqueName: \"kubernetes.io/projected/8028c623-f182-4e00-9c6d-c864a023abb5-kube-api-access-hcl4h\") pod \"glance-633a-account-create-update-kmj9d\" (UID: \"8028c623-f182-4e00-9c6d-c864a023abb5\") " pod="glance-kuttl-tests/glance-633a-account-create-update-kmj9d" Jan 31 15:03:41 crc kubenswrapper[4751]: I0131 15:03:41.644180 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnr8g\" (UniqueName: \"kubernetes.io/projected/4d66e78e-6853-45e7-966f-cd9ec9586439-kube-api-access-qnr8g\") pod \"glance-db-create-9z5l9\" (UID: \"4d66e78e-6853-45e7-966f-cd9ec9586439\") " pod="glance-kuttl-tests/glance-db-create-9z5l9" Jan 31 15:03:41 crc kubenswrapper[4751]: I0131 15:03:41.644212 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8028c623-f182-4e00-9c6d-c864a023abb5-operator-scripts\") pod \"glance-633a-account-create-update-kmj9d\" (UID: \"8028c623-f182-4e00-9c6d-c864a023abb5\") " pod="glance-kuttl-tests/glance-633a-account-create-update-kmj9d" Jan 31 15:03:41 crc kubenswrapper[4751]: I0131 15:03:41.745542 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qnr8g\" (UniqueName: \"kubernetes.io/projected/4d66e78e-6853-45e7-966f-cd9ec9586439-kube-api-access-qnr8g\") pod \"glance-db-create-9z5l9\" (UID: \"4d66e78e-6853-45e7-966f-cd9ec9586439\") " pod="glance-kuttl-tests/glance-db-create-9z5l9" Jan 31 15:03:41 crc kubenswrapper[4751]: I0131 15:03:41.745616 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8028c623-f182-4e00-9c6d-c864a023abb5-operator-scripts\") pod \"glance-633a-account-create-update-kmj9d\" (UID: \"8028c623-f182-4e00-9c6d-c864a023abb5\") " pod="glance-kuttl-tests/glance-633a-account-create-update-kmj9d" Jan 31 15:03:41 crc kubenswrapper[4751]: I0131 15:03:41.745698 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d66e78e-6853-45e7-966f-cd9ec9586439-operator-scripts\") pod \"glance-db-create-9z5l9\" (UID: \"4d66e78e-6853-45e7-966f-cd9ec9586439\") " pod="glance-kuttl-tests/glance-db-create-9z5l9" Jan 31 15:03:41 
crc kubenswrapper[4751]: I0131 15:03:41.745730 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hcl4h\" (UniqueName: \"kubernetes.io/projected/8028c623-f182-4e00-9c6d-c864a023abb5-kube-api-access-hcl4h\") pod \"glance-633a-account-create-update-kmj9d\" (UID: \"8028c623-f182-4e00-9c6d-c864a023abb5\") " pod="glance-kuttl-tests/glance-633a-account-create-update-kmj9d" Jan 31 15:03:41 crc kubenswrapper[4751]: I0131 15:03:41.747020 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8028c623-f182-4e00-9c6d-c864a023abb5-operator-scripts\") pod \"glance-633a-account-create-update-kmj9d\" (UID: \"8028c623-f182-4e00-9c6d-c864a023abb5\") " pod="glance-kuttl-tests/glance-633a-account-create-update-kmj9d" Jan 31 15:03:41 crc kubenswrapper[4751]: I0131 15:03:41.747629 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d66e78e-6853-45e7-966f-cd9ec9586439-operator-scripts\") pod \"glance-db-create-9z5l9\" (UID: \"4d66e78e-6853-45e7-966f-cd9ec9586439\") " pod="glance-kuttl-tests/glance-db-create-9z5l9" Jan 31 15:03:41 crc kubenswrapper[4751]: I0131 15:03:41.766672 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnr8g\" (UniqueName: \"kubernetes.io/projected/4d66e78e-6853-45e7-966f-cd9ec9586439-kube-api-access-qnr8g\") pod \"glance-db-create-9z5l9\" (UID: \"4d66e78e-6853-45e7-966f-cd9ec9586439\") " pod="glance-kuttl-tests/glance-db-create-9z5l9" Jan 31 15:03:41 crc kubenswrapper[4751]: I0131 15:03:41.766990 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hcl4h\" (UniqueName: \"kubernetes.io/projected/8028c623-f182-4e00-9c6d-c864a023abb5-kube-api-access-hcl4h\") pod \"glance-633a-account-create-update-kmj9d\" (UID: \"8028c623-f182-4e00-9c6d-c864a023abb5\") " 
pod="glance-kuttl-tests/glance-633a-account-create-update-kmj9d" Jan 31 15:03:41 crc kubenswrapper[4751]: I0131 15:03:41.804259 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-633a-account-create-update-kmj9d" Jan 31 15:03:41 crc kubenswrapper[4751]: I0131 15:03:41.814125 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-9z5l9" Jan 31 15:03:42 crc kubenswrapper[4751]: I0131 15:03:42.107656 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-9z5l9"] Jan 31 15:03:42 crc kubenswrapper[4751]: I0131 15:03:42.257470 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-633a-account-create-update-kmj9d"] Jan 31 15:03:42 crc kubenswrapper[4751]: W0131 15:03:42.265213 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8028c623_f182_4e00_9c6d_c864a023abb5.slice/crio-56637ac13b21967f19a3fac42b6a3500babb3afbe65dda72046eb3b31f096f6c WatchSource:0}: Error finding container 56637ac13b21967f19a3fac42b6a3500babb3afbe65dda72046eb3b31f096f6c: Status 404 returned error can't find the container with id 56637ac13b21967f19a3fac42b6a3500babb3afbe65dda72046eb3b31f096f6c Jan 31 15:03:42 crc kubenswrapper[4751]: I0131 15:03:42.383514 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-633a-account-create-update-kmj9d" event={"ID":"8028c623-f182-4e00-9c6d-c864a023abb5","Type":"ContainerStarted","Data":"25f84d0f51f45c02503d2025ee5bcd86d54fb4126f654afe8e8c27150f9da926"} Jan 31 15:03:42 crc kubenswrapper[4751]: I0131 15:03:42.383613 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-633a-account-create-update-kmj9d" 
event={"ID":"8028c623-f182-4e00-9c6d-c864a023abb5","Type":"ContainerStarted","Data":"56637ac13b21967f19a3fac42b6a3500babb3afbe65dda72046eb3b31f096f6c"}
Jan 31 15:03:42 crc kubenswrapper[4751]: I0131 15:03:42.388335 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-9z5l9" event={"ID":"4d66e78e-6853-45e7-966f-cd9ec9586439","Type":"ContainerStarted","Data":"2e90cc31ff36ceaadebe3379b42c48741b099a92854117d92e72b66bca77ad69"}
Jan 31 15:03:42 crc kubenswrapper[4751]: I0131 15:03:42.388382 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-9z5l9" event={"ID":"4d66e78e-6853-45e7-966f-cd9ec9586439","Type":"ContainerStarted","Data":"dfb2dd4464903e172bc85af5fa5fcbb80bcb16e0435c60c5df0d3d28f7986ca4"}
Jan 31 15:03:42 crc kubenswrapper[4751]: I0131 15:03:42.400651 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-633a-account-create-update-kmj9d" podStartSLOduration=1.400625856 podStartE2EDuration="1.400625856s" podCreationTimestamp="2026-01-31 15:03:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:03:42.39775125 +0000 UTC m=+1326.772464135" watchObservedRunningTime="2026-01-31 15:03:42.400625856 +0000 UTC m=+1326.775338741"
Jan 31 15:03:42 crc kubenswrapper[4751]: I0131 15:03:42.414135 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31a4e4de-2e47-46c3-8b73-06c6a7fe5282" path="/var/lib/kubelet/pods/31a4e4de-2e47-46c3-8b73-06c6a7fe5282/volumes"
Jan 31 15:03:42 crc kubenswrapper[4751]: I0131 15:03:42.414808 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="58c33299-57ac-4fc9-9751-b521d31e60cf" path="/var/lib/kubelet/pods/58c33299-57ac-4fc9-9751-b521d31e60cf/volumes"
Jan 31 15:03:42 crc kubenswrapper[4751]: I0131 15:03:42.415281 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="896f2e37-3440-46e7-81ed-2805ab336470" path="/var/lib/kubelet/pods/896f2e37-3440-46e7-81ed-2805ab336470/volumes"
Jan 31 15:03:42 crc kubenswrapper[4751]: I0131 15:03:42.417986 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-db-create-9z5l9" podStartSLOduration=1.4179702330000001 podStartE2EDuration="1.417970233s" podCreationTimestamp="2026-01-31 15:03:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:03:42.416964237 +0000 UTC m=+1326.791677112" watchObservedRunningTime="2026-01-31 15:03:42.417970233 +0000 UTC m=+1326.792683118"
Jan 31 15:03:43 crc kubenswrapper[4751]: I0131 15:03:43.400031 4751 generic.go:334] "Generic (PLEG): container finished" podID="8028c623-f182-4e00-9c6d-c864a023abb5" containerID="25f84d0f51f45c02503d2025ee5bcd86d54fb4126f654afe8e8c27150f9da926" exitCode=0
Jan 31 15:03:43 crc kubenswrapper[4751]: I0131 15:03:43.400130 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-633a-account-create-update-kmj9d" event={"ID":"8028c623-f182-4e00-9c6d-c864a023abb5","Type":"ContainerDied","Data":"25f84d0f51f45c02503d2025ee5bcd86d54fb4126f654afe8e8c27150f9da926"}
Jan 31 15:03:43 crc kubenswrapper[4751]: I0131 15:03:43.403990 4751 generic.go:334] "Generic (PLEG): container finished" podID="4d66e78e-6853-45e7-966f-cd9ec9586439" containerID="2e90cc31ff36ceaadebe3379b42c48741b099a92854117d92e72b66bca77ad69" exitCode=0
Jan 31 15:03:43 crc kubenswrapper[4751]: I0131 15:03:43.404037 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-9z5l9" event={"ID":"4d66e78e-6853-45e7-966f-cd9ec9586439","Type":"ContainerDied","Data":"2e90cc31ff36ceaadebe3379b42c48741b099a92854117d92e72b66bca77ad69"}
Jan 31 15:03:44 crc kubenswrapper[4751]: I0131 15:03:44.769667 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-9z5l9"
Jan 31 15:03:44 crc kubenswrapper[4751]: I0131 15:03:44.776469 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-633a-account-create-update-kmj9d"
Jan 31 15:03:44 crc kubenswrapper[4751]: I0131 15:03:44.794208 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8028c623-f182-4e00-9c6d-c864a023abb5-operator-scripts\") pod \"8028c623-f182-4e00-9c6d-c864a023abb5\" (UID: \"8028c623-f182-4e00-9c6d-c864a023abb5\") "
Jan 31 15:03:44 crc kubenswrapper[4751]: I0131 15:03:44.794290 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d66e78e-6853-45e7-966f-cd9ec9586439-operator-scripts\") pod \"4d66e78e-6853-45e7-966f-cd9ec9586439\" (UID: \"4d66e78e-6853-45e7-966f-cd9ec9586439\") "
Jan 31 15:03:44 crc kubenswrapper[4751]: I0131 15:03:44.794378 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qnr8g\" (UniqueName: \"kubernetes.io/projected/4d66e78e-6853-45e7-966f-cd9ec9586439-kube-api-access-qnr8g\") pod \"4d66e78e-6853-45e7-966f-cd9ec9586439\" (UID: \"4d66e78e-6853-45e7-966f-cd9ec9586439\") "
Jan 31 15:03:44 crc kubenswrapper[4751]: I0131 15:03:44.794421 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hcl4h\" (UniqueName: \"kubernetes.io/projected/8028c623-f182-4e00-9c6d-c864a023abb5-kube-api-access-hcl4h\") pod \"8028c623-f182-4e00-9c6d-c864a023abb5\" (UID: \"8028c623-f182-4e00-9c6d-c864a023abb5\") "
Jan 31 15:03:44 crc kubenswrapper[4751]: I0131 15:03:44.802307 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8028c623-f182-4e00-9c6d-c864a023abb5-kube-api-access-hcl4h" (OuterVolumeSpecName: "kube-api-access-hcl4h") pod "8028c623-f182-4e00-9c6d-c864a023abb5" (UID: "8028c623-f182-4e00-9c6d-c864a023abb5"). InnerVolumeSpecName "kube-api-access-hcl4h". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 15:03:44 crc kubenswrapper[4751]: I0131 15:03:44.803936 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4d66e78e-6853-45e7-966f-cd9ec9586439-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "4d66e78e-6853-45e7-966f-cd9ec9586439" (UID: "4d66e78e-6853-45e7-966f-cd9ec9586439"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 15:03:44 crc kubenswrapper[4751]: I0131 15:03:44.804990 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8028c623-f182-4e00-9c6d-c864a023abb5-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "8028c623-f182-4e00-9c6d-c864a023abb5" (UID: "8028c623-f182-4e00-9c6d-c864a023abb5"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 31 15:03:44 crc kubenswrapper[4751]: I0131 15:03:44.810717 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4d66e78e-6853-45e7-966f-cd9ec9586439-kube-api-access-qnr8g" (OuterVolumeSpecName: "kube-api-access-qnr8g") pod "4d66e78e-6853-45e7-966f-cd9ec9586439" (UID: "4d66e78e-6853-45e7-966f-cd9ec9586439"). InnerVolumeSpecName "kube-api-access-qnr8g". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 15:03:44 crc kubenswrapper[4751]: I0131 15:03:44.895578 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hcl4h\" (UniqueName: \"kubernetes.io/projected/8028c623-f182-4e00-9c6d-c864a023abb5-kube-api-access-hcl4h\") on node \"crc\" DevicePath \"\""
Jan 31 15:03:44 crc kubenswrapper[4751]: I0131 15:03:44.895614 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/8028c623-f182-4e00-9c6d-c864a023abb5-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 15:03:44 crc kubenswrapper[4751]: I0131 15:03:44.895623 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/4d66e78e-6853-45e7-966f-cd9ec9586439-operator-scripts\") on node \"crc\" DevicePath \"\""
Jan 31 15:03:44 crc kubenswrapper[4751]: I0131 15:03:44.895631 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qnr8g\" (UniqueName: \"kubernetes.io/projected/4d66e78e-6853-45e7-966f-cd9ec9586439-kube-api-access-qnr8g\") on node \"crc\" DevicePath \"\""
Jan 31 15:03:45 crc kubenswrapper[4751]: I0131 15:03:45.429238 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-633a-account-create-update-kmj9d" event={"ID":"8028c623-f182-4e00-9c6d-c864a023abb5","Type":"ContainerDied","Data":"56637ac13b21967f19a3fac42b6a3500babb3afbe65dda72046eb3b31f096f6c"}
Jan 31 15:03:45 crc kubenswrapper[4751]: I0131 15:03:45.429598 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="56637ac13b21967f19a3fac42b6a3500babb3afbe65dda72046eb3b31f096f6c"
Jan 31 15:03:45 crc kubenswrapper[4751]: I0131 15:03:45.429368 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-633a-account-create-update-kmj9d"
Jan 31 15:03:45 crc kubenswrapper[4751]: I0131 15:03:45.432759 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-9z5l9"
Jan 31 15:03:45 crc kubenswrapper[4751]: I0131 15:03:45.433050 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-9z5l9" event={"ID":"4d66e78e-6853-45e7-966f-cd9ec9586439","Type":"ContainerDied","Data":"dfb2dd4464903e172bc85af5fa5fcbb80bcb16e0435c60c5df0d3d28f7986ca4"}
Jan 31 15:03:45 crc kubenswrapper[4751]: I0131 15:03:45.433283 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="dfb2dd4464903e172bc85af5fa5fcbb80bcb16e0435c60c5df0d3d28f7986ca4"
Jan 31 15:03:45 crc kubenswrapper[4751]: I0131 15:03:45.711171 4751 scope.go:117] "RemoveContainer" containerID="c5d30bd3425343861aefae2acc945d17403c59649b3737361473864cd06659ea"
Jan 31 15:03:45 crc kubenswrapper[4751]: I0131 15:03:45.731322 4751 scope.go:117] "RemoveContainer" containerID="8f5e6c80881d23c78dc00e4be207273e5eb7f1474c90cd90d5b02783a4206916"
Jan 31 15:03:45 crc kubenswrapper[4751]: I0131 15:03:45.774885 4751 scope.go:117] "RemoveContainer" containerID="0a2dcc31122c7c5482843a5e80399a6846c7271da25c796eef9ce298a6180701"
Jan 31 15:03:46 crc kubenswrapper[4751]: I0131 15:03:46.693164 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-sync-zzxfv"]
Jan 31 15:03:46 crc kubenswrapper[4751]: E0131 15:03:46.693646 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8028c623-f182-4e00-9c6d-c864a023abb5" containerName="mariadb-account-create-update"
Jan 31 15:03:46 crc kubenswrapper[4751]: I0131 15:03:46.693676 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8028c623-f182-4e00-9c6d-c864a023abb5" containerName="mariadb-account-create-update"
Jan 31 15:03:46 crc kubenswrapper[4751]: E0131 15:03:46.693707 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4d66e78e-6853-45e7-966f-cd9ec9586439" containerName="mariadb-database-create"
Jan 31 15:03:46 crc kubenswrapper[4751]: I0131 15:03:46.693726 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="4d66e78e-6853-45e7-966f-cd9ec9586439" containerName="mariadb-database-create"
Jan 31 15:03:46 crc kubenswrapper[4751]: I0131 15:03:46.693959 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="8028c623-f182-4e00-9c6d-c864a023abb5" containerName="mariadb-account-create-update"
Jan 31 15:03:46 crc kubenswrapper[4751]: I0131 15:03:46.694019 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="4d66e78e-6853-45e7-966f-cd9ec9586439" containerName="mariadb-database-create"
Jan 31 15:03:46 crc kubenswrapper[4751]: I0131 15:03:46.694783 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-zzxfv"
Jan 31 15:03:46 crc kubenswrapper[4751]: I0131 15:03:46.702136 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-zzxfv"]
Jan 31 15:03:46 crc kubenswrapper[4751]: I0131 15:03:46.702276 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-lgkvw"
Jan 31 15:03:46 crc kubenswrapper[4751]: I0131 15:03:46.702276 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-config-data"
Jan 31 15:03:46 crc kubenswrapper[4751]: I0131 15:03:46.723125 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n6f8n\" (UniqueName: \"kubernetes.io/projected/e91c4dc3-9319-4e4b-951a-4e1f117c3215-kube-api-access-n6f8n\") pod \"glance-db-sync-zzxfv\" (UID: \"e91c4dc3-9319-4e4b-951a-4e1f117c3215\") " pod="glance-kuttl-tests/glance-db-sync-zzxfv"
Jan 31 15:03:46 crc kubenswrapper[4751]: I0131 15:03:46.723184 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e91c4dc3-9319-4e4b-951a-4e1f117c3215-config-data\") pod \"glance-db-sync-zzxfv\" (UID: \"e91c4dc3-9319-4e4b-951a-4e1f117c3215\") " pod="glance-kuttl-tests/glance-db-sync-zzxfv"
Jan 31 15:03:46 crc kubenswrapper[4751]: I0131 15:03:46.723203 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e91c4dc3-9319-4e4b-951a-4e1f117c3215-db-sync-config-data\") pod \"glance-db-sync-zzxfv\" (UID: \"e91c4dc3-9319-4e4b-951a-4e1f117c3215\") " pod="glance-kuttl-tests/glance-db-sync-zzxfv"
Jan 31 15:03:46 crc kubenswrapper[4751]: I0131 15:03:46.824481 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e91c4dc3-9319-4e4b-951a-4e1f117c3215-config-data\") pod \"glance-db-sync-zzxfv\" (UID: \"e91c4dc3-9319-4e4b-951a-4e1f117c3215\") " pod="glance-kuttl-tests/glance-db-sync-zzxfv"
Jan 31 15:03:46 crc kubenswrapper[4751]: I0131 15:03:46.824533 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e91c4dc3-9319-4e4b-951a-4e1f117c3215-db-sync-config-data\") pod \"glance-db-sync-zzxfv\" (UID: \"e91c4dc3-9319-4e4b-951a-4e1f117c3215\") " pod="glance-kuttl-tests/glance-db-sync-zzxfv"
Jan 31 15:03:46 crc kubenswrapper[4751]: I0131 15:03:46.824642 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n6f8n\" (UniqueName: \"kubernetes.io/projected/e91c4dc3-9319-4e4b-951a-4e1f117c3215-kube-api-access-n6f8n\") pod \"glance-db-sync-zzxfv\" (UID: \"e91c4dc3-9319-4e4b-951a-4e1f117c3215\") " pod="glance-kuttl-tests/glance-db-sync-zzxfv"
Jan 31 15:03:46 crc kubenswrapper[4751]: I0131 15:03:46.829349 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e91c4dc3-9319-4e4b-951a-4e1f117c3215-db-sync-config-data\") pod \"glance-db-sync-zzxfv\" (UID: \"e91c4dc3-9319-4e4b-951a-4e1f117c3215\") " pod="glance-kuttl-tests/glance-db-sync-zzxfv"
Jan 31 15:03:46 crc kubenswrapper[4751]: I0131 15:03:46.838829 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e91c4dc3-9319-4e4b-951a-4e1f117c3215-config-data\") pod \"glance-db-sync-zzxfv\" (UID: \"e91c4dc3-9319-4e4b-951a-4e1f117c3215\") " pod="glance-kuttl-tests/glance-db-sync-zzxfv"
Jan 31 15:03:46 crc kubenswrapper[4751]: I0131 15:03:46.849169 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n6f8n\" (UniqueName: \"kubernetes.io/projected/e91c4dc3-9319-4e4b-951a-4e1f117c3215-kube-api-access-n6f8n\") pod \"glance-db-sync-zzxfv\" (UID: \"e91c4dc3-9319-4e4b-951a-4e1f117c3215\") " pod="glance-kuttl-tests/glance-db-sync-zzxfv"
Jan 31 15:03:47 crc kubenswrapper[4751]: I0131 15:03:47.014512 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-zzxfv"
Jan 31 15:03:47 crc kubenswrapper[4751]: I0131 15:03:47.435916 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-zzxfv"]
Jan 31 15:03:47 crc kubenswrapper[4751]: W0131 15:03:47.440219 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pode91c4dc3_9319_4e4b_951a_4e1f117c3215.slice/crio-6a6ad5508209ce047cf73ef6b50355c4bb3cbbab034952d341b7d096d51f510f WatchSource:0}: Error finding container 6a6ad5508209ce047cf73ef6b50355c4bb3cbbab034952d341b7d096d51f510f: Status 404 returned error can't find the container with id 6a6ad5508209ce047cf73ef6b50355c4bb3cbbab034952d341b7d096d51f510f
Jan 31 15:03:48 crc kubenswrapper[4751]: I0131 15:03:48.458148 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-zzxfv" event={"ID":"e91c4dc3-9319-4e4b-951a-4e1f117c3215","Type":"ContainerStarted","Data":"15a7c13661f7a9dc9cca48ea38cbda46b049856ab09d05ef63e1d7c0a14b8bb5"}
Jan 31 15:03:48 crc kubenswrapper[4751]: I0131 15:03:48.458519 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-zzxfv" event={"ID":"e91c4dc3-9319-4e4b-951a-4e1f117c3215","Type":"ContainerStarted","Data":"6a6ad5508209ce047cf73ef6b50355c4bb3cbbab034952d341b7d096d51f510f"}
Jan 31 15:03:48 crc kubenswrapper[4751]: I0131 15:03:48.477155 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-db-sync-zzxfv" podStartSLOduration=2.477138016 podStartE2EDuration="2.477138016s" podCreationTimestamp="2026-01-31 15:03:46 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:03:48.472112284 +0000 UTC m=+1332.846825189" watchObservedRunningTime="2026-01-31 15:03:48.477138016 +0000 UTC m=+1332.851850901"
Jan 31 15:03:51 crc kubenswrapper[4751]: I0131 15:03:51.488725 4751 generic.go:334] "Generic (PLEG): container finished" podID="e91c4dc3-9319-4e4b-951a-4e1f117c3215" containerID="15a7c13661f7a9dc9cca48ea38cbda46b049856ab09d05ef63e1d7c0a14b8bb5" exitCode=0
Jan 31 15:03:51 crc kubenswrapper[4751]: I0131 15:03:51.489040 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-zzxfv" event={"ID":"e91c4dc3-9319-4e4b-951a-4e1f117c3215","Type":"ContainerDied","Data":"15a7c13661f7a9dc9cca48ea38cbda46b049856ab09d05ef63e1d7c0a14b8bb5"}
Jan 31 15:03:52 crc kubenswrapper[4751]: I0131 15:03:52.814273 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-zzxfv"
Jan 31 15:03:52 crc kubenswrapper[4751]: I0131 15:03:52.914208 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e91c4dc3-9319-4e4b-951a-4e1f117c3215-db-sync-config-data\") pod \"e91c4dc3-9319-4e4b-951a-4e1f117c3215\" (UID: \"e91c4dc3-9319-4e4b-951a-4e1f117c3215\") "
Jan 31 15:03:52 crc kubenswrapper[4751]: I0131 15:03:52.914360 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n6f8n\" (UniqueName: \"kubernetes.io/projected/e91c4dc3-9319-4e4b-951a-4e1f117c3215-kube-api-access-n6f8n\") pod \"e91c4dc3-9319-4e4b-951a-4e1f117c3215\" (UID: \"e91c4dc3-9319-4e4b-951a-4e1f117c3215\") "
Jan 31 15:03:52 crc kubenswrapper[4751]: I0131 15:03:52.914407 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e91c4dc3-9319-4e4b-951a-4e1f117c3215-config-data\") pod \"e91c4dc3-9319-4e4b-951a-4e1f117c3215\" (UID: \"e91c4dc3-9319-4e4b-951a-4e1f117c3215\") "
Jan 31 15:03:52 crc kubenswrapper[4751]: I0131 15:03:52.920212 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e91c4dc3-9319-4e4b-951a-4e1f117c3215-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "e91c4dc3-9319-4e4b-951a-4e1f117c3215" (UID: "e91c4dc3-9319-4e4b-951a-4e1f117c3215"). InnerVolumeSpecName "db-sync-config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 15:03:52 crc kubenswrapper[4751]: I0131 15:03:52.920262 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e91c4dc3-9319-4e4b-951a-4e1f117c3215-kube-api-access-n6f8n" (OuterVolumeSpecName: "kube-api-access-n6f8n") pod "e91c4dc3-9319-4e4b-951a-4e1f117c3215" (UID: "e91c4dc3-9319-4e4b-951a-4e1f117c3215"). InnerVolumeSpecName "kube-api-access-n6f8n". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 15:03:52 crc kubenswrapper[4751]: I0131 15:03:52.959981 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e91c4dc3-9319-4e4b-951a-4e1f117c3215-config-data" (OuterVolumeSpecName: "config-data") pod "e91c4dc3-9319-4e4b-951a-4e1f117c3215" (UID: "e91c4dc3-9319-4e4b-951a-4e1f117c3215"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 31 15:03:53 crc kubenswrapper[4751]: I0131 15:03:53.016151 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n6f8n\" (UniqueName: \"kubernetes.io/projected/e91c4dc3-9319-4e4b-951a-4e1f117c3215-kube-api-access-n6f8n\") on node \"crc\" DevicePath \"\""
Jan 31 15:03:53 crc kubenswrapper[4751]: I0131 15:03:53.016188 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e91c4dc3-9319-4e4b-951a-4e1f117c3215-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 15:03:53 crc kubenswrapper[4751]: I0131 15:03:53.016201 4751 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/e91c4dc3-9319-4e4b-951a-4e1f117c3215-db-sync-config-data\") on node \"crc\" DevicePath \"\""
Jan 31 15:03:53 crc kubenswrapper[4751]: I0131 15:03:53.513444 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-zzxfv" event={"ID":"e91c4dc3-9319-4e4b-951a-4e1f117c3215","Type":"ContainerDied","Data":"6a6ad5508209ce047cf73ef6b50355c4bb3cbbab034952d341b7d096d51f510f"}
Jan 31 15:03:53 crc kubenswrapper[4751]: I0131 15:03:53.513927 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a6ad5508209ce047cf73ef6b50355c4bb3cbbab034952d341b7d096d51f510f"
Jan 31 15:03:53 crc kubenswrapper[4751]: I0131 15:03:53.513495 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-zzxfv"
Jan 31 15:03:53 crc kubenswrapper[4751]: I0131 15:03:53.922380 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-0"]
Jan 31 15:03:53 crc kubenswrapper[4751]: E0131 15:03:53.922631 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e91c4dc3-9319-4e4b-951a-4e1f117c3215" containerName="glance-db-sync"
Jan 31 15:03:53 crc kubenswrapper[4751]: I0131 15:03:53.922642 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e91c4dc3-9319-4e4b-951a-4e1f117c3215" containerName="glance-db-sync"
Jan 31 15:03:53 crc kubenswrapper[4751]: I0131 15:03:53.922805 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e91c4dc3-9319-4e4b-951a-4e1f117c3215" containerName="glance-db-sync"
Jan 31 15:03:53 crc kubenswrapper[4751]: I0131 15:03:53.923523 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:53 crc kubenswrapper[4751]: I0131 15:03:53.925487 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-lgkvw"
Jan 31 15:03:53 crc kubenswrapper[4751]: I0131 15:03:53.925820 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-scripts"
Jan 31 15:03:53 crc kubenswrapper[4751]: I0131 15:03:53.925899 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-single-config-data"
Jan 31 15:03:53 crc kubenswrapper[4751]: I0131 15:03:53.935363 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"]
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.032985 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.033030 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9p949\" (UniqueName: \"kubernetes.io/projected/0afd722e-d093-428e-9f16-85a889d08de1-kube-api-access-9p949\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.033055 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-sys\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.033098 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0afd722e-d093-428e-9f16-85a889d08de1-scripts\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.033126 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0afd722e-d093-428e-9f16-85a889d08de1-config-data\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.033189 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0afd722e-d093-428e-9f16-85a889d08de1-httpd-run\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.033208 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-etc-nvme\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.033375 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.033422 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-lib-modules\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.033490 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0afd722e-d093-428e-9f16-85a889d08de1-logs\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.033506 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.033538 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-run\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.033558 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-dev\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.033578 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.134978 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.135024 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9p949\" (UniqueName: \"kubernetes.io/projected/0afd722e-d093-428e-9f16-85a889d08de1-kube-api-access-9p949\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.135052 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-sys\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.135100 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0afd722e-d093-428e-9f16-85a889d08de1-scripts\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.135126 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0afd722e-d093-428e-9f16-85a889d08de1-config-data\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.135188 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0afd722e-d093-428e-9f16-85a889d08de1-httpd-run\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.135212 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-etc-nvme\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.135230 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-sys\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.135268 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.135336 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-lib-modules\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.135432 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0afd722e-d093-428e-9f16-85a889d08de1-logs\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.135459 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.135455 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-etc-nvme\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.135494 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-lib-modules\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.135505 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-run\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.135529 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-run\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.135537 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-dev\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.135201 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.135563 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-dev\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.135574 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.135628 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.135639 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") device mount path \"/mnt/openstack/pv16\"" pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.135709 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") device mount path \"/mnt/openstack/pv15\"" pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.136032 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0afd722e-d093-428e-9f16-85a889d08de1-httpd-run\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.136201 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0afd722e-d093-428e-9f16-85a889d08de1-logs\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.141193 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0afd722e-d093-428e-9f16-85a889d08de1-scripts\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.141671 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0afd722e-d093-428e-9f16-85a889d08de1-config-data\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.158693 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0"
Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.172786 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage15-crc\" (UniqueName:
\"kubernetes.io/local-volume/local-storage15-crc\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.177748 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9p949\" (UniqueName: \"kubernetes.io/projected/0afd722e-d093-428e-9f16-85a889d08de1-kube-api-access-9p949\") pod \"glance-default-single-0\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.239773 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.652726 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 15:03:54 crc kubenswrapper[4751]: I0131 15:03:54.720163 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 15:03:55 crc kubenswrapper[4751]: I0131 15:03:55.537105 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"0afd722e-d093-428e-9f16-85a889d08de1","Type":"ContainerStarted","Data":"ad052eff92c8d2861fc114164e87006f55dc34b295c394c1d33b1cf962ac16ee"} Jan 31 15:03:55 crc kubenswrapper[4751]: I0131 15:03:55.537657 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"0afd722e-d093-428e-9f16-85a889d08de1","Type":"ContainerStarted","Data":"982d4d2fcf6c2bcc38cabd751f3c1dc69dd5d9aa18ff83f7b91bae8effb85e28"} Jan 31 15:03:55 crc kubenswrapper[4751]: I0131 15:03:55.537672 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" 
event={"ID":"0afd722e-d093-428e-9f16-85a889d08de1","Type":"ContainerStarted","Data":"ede3ffddd3e2cca5c5d06668cb40b6dbe9a582dbacbe5e907af617419d72cb7b"} Jan 31 15:03:55 crc kubenswrapper[4751]: I0131 15:03:55.537309 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="0afd722e-d093-428e-9f16-85a889d08de1" containerName="glance-httpd" containerID="cri-o://ad052eff92c8d2861fc114164e87006f55dc34b295c394c1d33b1cf962ac16ee" gracePeriod=30 Jan 31 15:03:55 crc kubenswrapper[4751]: I0131 15:03:55.537242 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="0afd722e-d093-428e-9f16-85a889d08de1" containerName="glance-log" containerID="cri-o://982d4d2fcf6c2bcc38cabd751f3c1dc69dd5d9aa18ff83f7b91bae8effb85e28" gracePeriod=30 Jan 31 15:03:55 crc kubenswrapper[4751]: I0131 15:03:55.569416 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-0" podStartSLOduration=2.569394978 podStartE2EDuration="2.569394978s" podCreationTimestamp="2026-01-31 15:03:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:03:55.56605271 +0000 UTC m=+1339.940765605" watchObservedRunningTime="2026-01-31 15:03:55.569394978 +0000 UTC m=+1339.944107883" Jan 31 15:03:55 crc kubenswrapper[4751]: I0131 15:03:55.931733 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.062351 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-var-locks-brick\") pod \"0afd722e-d093-428e-9f16-85a889d08de1\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.062666 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-run\") pod \"0afd722e-d093-428e-9f16-85a889d08de1\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.062720 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9p949\" (UniqueName: \"kubernetes.io/projected/0afd722e-d093-428e-9f16-85a889d08de1-kube-api-access-9p949\") pod \"0afd722e-d093-428e-9f16-85a889d08de1\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.062455 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "0afd722e-d093-428e-9f16-85a889d08de1" (UID: "0afd722e-d093-428e-9f16-85a889d08de1"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.062751 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0afd722e-d093-428e-9f16-85a889d08de1-config-data\") pod \"0afd722e-d093-428e-9f16-85a889d08de1\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.062771 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0afd722e-d093-428e-9f16-85a889d08de1-scripts\") pod \"0afd722e-d093-428e-9f16-85a889d08de1\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.062775 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-run" (OuterVolumeSpecName: "run") pod "0afd722e-d093-428e-9f16-85a889d08de1" (UID: "0afd722e-d093-428e-9f16-85a889d08de1"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.062839 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-dev\") pod \"0afd722e-d093-428e-9f16-85a889d08de1\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.062878 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"0afd722e-d093-428e-9f16-85a889d08de1\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.062898 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-lib-modules\") pod \"0afd722e-d093-428e-9f16-85a889d08de1\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.062930 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0afd722e-d093-428e-9f16-85a889d08de1-logs\") pod \"0afd722e-d093-428e-9f16-85a889d08de1\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.062944 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-sys\") pod \"0afd722e-d093-428e-9f16-85a889d08de1\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.062959 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-etc-iscsi\") pod 
\"0afd722e-d093-428e-9f16-85a889d08de1\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.062975 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0afd722e-d093-428e-9f16-85a889d08de1-httpd-run\") pod \"0afd722e-d093-428e-9f16-85a889d08de1\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.063022 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-etc-nvme\") pod \"0afd722e-d093-428e-9f16-85a889d08de1\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.063036 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"0afd722e-d093-428e-9f16-85a889d08de1\" (UID: \"0afd722e-d093-428e-9f16-85a889d08de1\") " Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.063366 4751 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.063389 4751 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.063428 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "0afd722e-d093-428e-9f16-85a889d08de1" (UID: "0afd722e-d093-428e-9f16-85a889d08de1"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.063474 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "0afd722e-d093-428e-9f16-85a889d08de1" (UID: "0afd722e-d093-428e-9f16-85a889d08de1"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.063535 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-sys" (OuterVolumeSpecName: "sys") pod "0afd722e-d093-428e-9f16-85a889d08de1" (UID: "0afd722e-d093-428e-9f16-85a889d08de1"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.063579 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "0afd722e-d093-428e-9f16-85a889d08de1" (UID: "0afd722e-d093-428e-9f16-85a889d08de1"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.063585 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-dev" (OuterVolumeSpecName: "dev") pod "0afd722e-d093-428e-9f16-85a889d08de1" (UID: "0afd722e-d093-428e-9f16-85a889d08de1"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.063684 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0afd722e-d093-428e-9f16-85a889d08de1-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "0afd722e-d093-428e-9f16-85a889d08de1" (UID: "0afd722e-d093-428e-9f16-85a889d08de1"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.064011 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0afd722e-d093-428e-9f16-85a889d08de1-logs" (OuterVolumeSpecName: "logs") pod "0afd722e-d093-428e-9f16-85a889d08de1" (UID: "0afd722e-d093-428e-9f16-85a889d08de1"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.068248 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage16-crc" (OuterVolumeSpecName: "glance") pod "0afd722e-d093-428e-9f16-85a889d08de1" (UID: "0afd722e-d093-428e-9f16-85a889d08de1"). InnerVolumeSpecName "local-storage16-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.068878 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage15-crc" (OuterVolumeSpecName: "glance-cache") pod "0afd722e-d093-428e-9f16-85a889d08de1" (UID: "0afd722e-d093-428e-9f16-85a889d08de1"). InnerVolumeSpecName "local-storage15-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.069975 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0afd722e-d093-428e-9f16-85a889d08de1-kube-api-access-9p949" (OuterVolumeSpecName: "kube-api-access-9p949") pod "0afd722e-d093-428e-9f16-85a889d08de1" (UID: "0afd722e-d093-428e-9f16-85a889d08de1"). InnerVolumeSpecName "kube-api-access-9p949". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.070358 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0afd722e-d093-428e-9f16-85a889d08de1-scripts" (OuterVolumeSpecName: "scripts") pod "0afd722e-d093-428e-9f16-85a889d08de1" (UID: "0afd722e-d093-428e-9f16-85a889d08de1"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.109438 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0afd722e-d093-428e-9f16-85a889d08de1-config-data" (OuterVolumeSpecName: "config-data") pod "0afd722e-d093-428e-9f16-85a889d08de1" (UID: "0afd722e-d093-428e-9f16-85a889d08de1"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.165034 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/0afd722e-d093-428e-9f16-85a889d08de1-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.165167 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/0afd722e-d093-428e-9f16-85a889d08de1-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.165234 4751 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-dev\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.165314 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") on node \"crc\" " Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.165386 4751 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.165479 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/0afd722e-d093-428e-9f16-85a889d08de1-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.165538 4751 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-sys\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.165591 4751 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.165648 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/0afd722e-d093-428e-9f16-85a889d08de1-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.165702 4751 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/0afd722e-d093-428e-9f16-85a889d08de1-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.165769 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") on node \"crc\" " Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.165831 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9p949\" (UniqueName: \"kubernetes.io/projected/0afd722e-d093-428e-9f16-85a889d08de1-kube-api-access-9p949\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.178624 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage16-crc" (UniqueName: "kubernetes.io/local-volume/local-storage16-crc") on node "crc" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.180587 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage15-crc" (UniqueName: "kubernetes.io/local-volume/local-storage15-crc") on node "crc" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.268458 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.268494 4751 reconciler_common.go:293] "Volume detached for volume 
\"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.545443 4751 generic.go:334] "Generic (PLEG): container finished" podID="0afd722e-d093-428e-9f16-85a889d08de1" containerID="ad052eff92c8d2861fc114164e87006f55dc34b295c394c1d33b1cf962ac16ee" exitCode=143 Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.545470 4751 generic.go:334] "Generic (PLEG): container finished" podID="0afd722e-d093-428e-9f16-85a889d08de1" containerID="982d4d2fcf6c2bcc38cabd751f3c1dc69dd5d9aa18ff83f7b91bae8effb85e28" exitCode=143 Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.545491 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"0afd722e-d093-428e-9f16-85a889d08de1","Type":"ContainerDied","Data":"ad052eff92c8d2861fc114164e87006f55dc34b295c394c1d33b1cf962ac16ee"} Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.545515 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.545536 4751 scope.go:117] "RemoveContainer" containerID="ad052eff92c8d2861fc114164e87006f55dc34b295c394c1d33b1cf962ac16ee" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.545516 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"0afd722e-d093-428e-9f16-85a889d08de1","Type":"ContainerDied","Data":"982d4d2fcf6c2bcc38cabd751f3c1dc69dd5d9aa18ff83f7b91bae8effb85e28"} Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.545677 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"0afd722e-d093-428e-9f16-85a889d08de1","Type":"ContainerDied","Data":"ede3ffddd3e2cca5c5d06668cb40b6dbe9a582dbacbe5e907af617419d72cb7b"} Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.567348 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.575619 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.588684 4751 scope.go:117] "RemoveContainer" containerID="982d4d2fcf6c2bcc38cabd751f3c1dc69dd5d9aa18ff83f7b91bae8effb85e28" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.597039 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 15:03:56 crc kubenswrapper[4751]: E0131 15:03:56.597376 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0afd722e-d093-428e-9f16-85a889d08de1" containerName="glance-httpd" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.597394 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="0afd722e-d093-428e-9f16-85a889d08de1" containerName="glance-httpd" Jan 31 15:03:56 crc kubenswrapper[4751]: 
E0131 15:03:56.597411 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0afd722e-d093-428e-9f16-85a889d08de1" containerName="glance-log" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.597416 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="0afd722e-d093-428e-9f16-85a889d08de1" containerName="glance-log" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.597554 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="0afd722e-d093-428e-9f16-85a889d08de1" containerName="glance-httpd" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.597570 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="0afd722e-d093-428e-9f16-85a889d08de1" containerName="glance-log" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.598361 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.604235 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-scripts" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.604336 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-lgkvw" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.604620 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-single-config-data" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.616582 4751 scope.go:117] "RemoveContainer" containerID="ad052eff92c8d2861fc114164e87006f55dc34b295c394c1d33b1cf962ac16ee" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.620494 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 15:03:56 crc kubenswrapper[4751]: E0131 15:03:56.628742 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find 
container \"ad052eff92c8d2861fc114164e87006f55dc34b295c394c1d33b1cf962ac16ee\": container with ID starting with ad052eff92c8d2861fc114164e87006f55dc34b295c394c1d33b1cf962ac16ee not found: ID does not exist" containerID="ad052eff92c8d2861fc114164e87006f55dc34b295c394c1d33b1cf962ac16ee" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.628789 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad052eff92c8d2861fc114164e87006f55dc34b295c394c1d33b1cf962ac16ee"} err="failed to get container status \"ad052eff92c8d2861fc114164e87006f55dc34b295c394c1d33b1cf962ac16ee\": rpc error: code = NotFound desc = could not find container \"ad052eff92c8d2861fc114164e87006f55dc34b295c394c1d33b1cf962ac16ee\": container with ID starting with ad052eff92c8d2861fc114164e87006f55dc34b295c394c1d33b1cf962ac16ee not found: ID does not exist" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.628814 4751 scope.go:117] "RemoveContainer" containerID="982d4d2fcf6c2bcc38cabd751f3c1dc69dd5d9aa18ff83f7b91bae8effb85e28" Jan 31 15:03:56 crc kubenswrapper[4751]: E0131 15:03:56.629195 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"982d4d2fcf6c2bcc38cabd751f3c1dc69dd5d9aa18ff83f7b91bae8effb85e28\": container with ID starting with 982d4d2fcf6c2bcc38cabd751f3c1dc69dd5d9aa18ff83f7b91bae8effb85e28 not found: ID does not exist" containerID="982d4d2fcf6c2bcc38cabd751f3c1dc69dd5d9aa18ff83f7b91bae8effb85e28" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.629232 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"982d4d2fcf6c2bcc38cabd751f3c1dc69dd5d9aa18ff83f7b91bae8effb85e28"} err="failed to get container status \"982d4d2fcf6c2bcc38cabd751f3c1dc69dd5d9aa18ff83f7b91bae8effb85e28\": rpc error: code = NotFound desc = could not find container \"982d4d2fcf6c2bcc38cabd751f3c1dc69dd5d9aa18ff83f7b91bae8effb85e28\": container 
with ID starting with 982d4d2fcf6c2bcc38cabd751f3c1dc69dd5d9aa18ff83f7b91bae8effb85e28 not found: ID does not exist" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.629247 4751 scope.go:117] "RemoveContainer" containerID="ad052eff92c8d2861fc114164e87006f55dc34b295c394c1d33b1cf962ac16ee" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.629639 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ad052eff92c8d2861fc114164e87006f55dc34b295c394c1d33b1cf962ac16ee"} err="failed to get container status \"ad052eff92c8d2861fc114164e87006f55dc34b295c394c1d33b1cf962ac16ee\": rpc error: code = NotFound desc = could not find container \"ad052eff92c8d2861fc114164e87006f55dc34b295c394c1d33b1cf962ac16ee\": container with ID starting with ad052eff92c8d2861fc114164e87006f55dc34b295c394c1d33b1cf962ac16ee not found: ID does not exist" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.629679 4751 scope.go:117] "RemoveContainer" containerID="982d4d2fcf6c2bcc38cabd751f3c1dc69dd5d9aa18ff83f7b91bae8effb85e28" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.630814 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"982d4d2fcf6c2bcc38cabd751f3c1dc69dd5d9aa18ff83f7b91bae8effb85e28"} err="failed to get container status \"982d4d2fcf6c2bcc38cabd751f3c1dc69dd5d9aa18ff83f7b91bae8effb85e28\": rpc error: code = NotFound desc = could not find container \"982d4d2fcf6c2bcc38cabd751f3c1dc69dd5d9aa18ff83f7b91bae8effb85e28\": container with ID starting with 982d4d2fcf6c2bcc38cabd751f3c1dc69dd5d9aa18ff83f7b91bae8effb85e28 not found: ID does not exist" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.673642 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-scripts\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") 
" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.673686 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.673708 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-etc-nvme\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.673743 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.673763 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.673786 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-config-data\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " 
pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.673801 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.673818 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-dev\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.673856 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-sys\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.673876 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-lib-modules\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.673893 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-httpd-run\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " 
pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.673917 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-run\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.673969 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-logs\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.674093 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hwqz\" (UniqueName: \"kubernetes.io/projected/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-kube-api-access-7hwqz\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.775965 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-sys\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.776367 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-lib-modules\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 
15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.776390 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-httpd-run\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.776429 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-run\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.776459 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-logs\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.776494 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hwqz\" (UniqueName: \"kubernetes.io/projected/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-kube-api-access-7hwqz\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.776516 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-scripts\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.776579 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.776601 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-etc-nvme\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.776652 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.776678 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.776713 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-config-data\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.776737 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.776769 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-dev\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.777065 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-dev\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.777134 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-sys\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.777162 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-lib-modules\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.777624 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-httpd-run\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " 
pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.777665 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-run\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.777924 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-logs\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.778286 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-etc-nvme\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.778374 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") device mount path \"/mnt/openstack/pv15\"" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.778406 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") device mount path \"/mnt/openstack/pv16\"" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 
15:03:56.778799 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-etc-iscsi\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.778927 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-var-locks-brick\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.786412 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-config-data\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.797238 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.798318 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-scripts\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.798397 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hwqz\" (UniqueName: 
\"kubernetes.io/projected/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-kube-api-access-7hwqz\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.798696 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"glance-default-single-0\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:56 crc kubenswrapper[4751]: I0131 15:03:56.917740 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:03:57 crc kubenswrapper[4751]: I0131 15:03:57.333102 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 15:03:57 crc kubenswrapper[4751]: I0131 15:03:57.556835 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d","Type":"ContainerStarted","Data":"9f3ae92fe0ff829368293b9ad1323f714b2c1d0600b09baf58692dca7948e19c"} Jan 31 15:03:57 crc kubenswrapper[4751]: I0131 15:03:57.556873 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d","Type":"ContainerStarted","Data":"3752a1af33745a097b553c49a65db992ed225f5133c4335738998e6644147a8d"} Jan 31 15:03:58 crc kubenswrapper[4751]: I0131 15:03:58.429676 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0afd722e-d093-428e-9f16-85a889d08de1" path="/var/lib/kubelet/pods/0afd722e-d093-428e-9f16-85a889d08de1/volumes" Jan 31 15:03:58 crc kubenswrapper[4751]: I0131 15:03:58.569111 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" 
event={"ID":"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d","Type":"ContainerStarted","Data":"028ae3739718b4f9fa1406588bccd6e53a5c4051e633a6571cc7446d59642b17"} Jan 31 15:03:58 crc kubenswrapper[4751]: I0131 15:03:58.598257 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-0" podStartSLOduration=2.598236099 podStartE2EDuration="2.598236099s" podCreationTimestamp="2026-01-31 15:03:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:03:58.589893369 +0000 UTC m=+1342.964606294" watchObservedRunningTime="2026-01-31 15:03:58.598236099 +0000 UTC m=+1342.972948994" Jan 31 15:04:06 crc kubenswrapper[4751]: I0131 15:04:06.918439 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:04:06 crc kubenswrapper[4751]: I0131 15:04:06.918865 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:04:06 crc kubenswrapper[4751]: I0131 15:04:06.950555 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:04:06 crc kubenswrapper[4751]: I0131 15:04:06.974715 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:04:07 crc kubenswrapper[4751]: I0131 15:04:07.643058 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:04:07 crc kubenswrapper[4751]: I0131 15:04:07.643132 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:04:09 crc kubenswrapper[4751]: I0131 15:04:09.713011 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" 
pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:04:09 crc kubenswrapper[4751]: I0131 15:04:09.713864 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 15:04:09 crc kubenswrapper[4751]: I0131 15:04:09.717270 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.423309 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-2"] Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.427434 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.462666 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.470042 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.491601 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-2"] Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.503202 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.515146 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q42nd\" (UniqueName: \"kubernetes.io/projected/34a7875b-0d63-43d7-9833-07b4ddc85ff6-kube-api-access-q42nd\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.515194 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34a7875b-0d63-43d7-9833-07b4ddc85ff6-httpd-run\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.515287 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.515329 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 
31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.515347 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34a7875b-0d63-43d7-9833-07b4ddc85ff6-config-data\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.515374 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-run\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.515396 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34a7875b-0d63-43d7-9833-07b4ddc85ff6-logs\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.515422 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-var-locks-brick\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.515448 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-etc-nvme\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc 
kubenswrapper[4751]: I0131 15:04:12.515471 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34a7875b-0d63-43d7-9833-07b4ddc85ff6-scripts\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.515487 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-sys\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.515502 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-lib-modules\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.515520 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-dev\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.515534 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-etc-iscsi\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 
15:04:12.616804 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.616851 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.616879 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.616910 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-httpd-run\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.616928 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-sys\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.616946 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.616965 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34a7875b-0d63-43d7-9833-07b4ddc85ff6-config-data\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.616991 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.617007 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-run\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.617041 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-run\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.617108 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: 
\"kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-etc-nvme\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.617193 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-dev\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.617224 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34a7875b-0d63-43d7-9833-07b4ddc85ff6-logs\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.617302 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-var-locks-brick\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.617365 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-var-locks-brick\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.617396 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-single-2\" (UID: 
\"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") device mount path \"/mnt/openstack/pv11\"" pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.617416 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") device mount path \"/mnt/openstack/pv01\"" pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.617609 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-etc-nvme\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.617644 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-lib-modules\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.617664 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.617680 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34a7875b-0d63-43d7-9833-07b4ddc85ff6-scripts\") pod \"glance-default-single-2\" (UID: 
\"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.617698 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pqz7z\" (UniqueName: \"kubernetes.io/projected/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-kube-api-access-pqz7z\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.617709 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34a7875b-0d63-43d7-9833-07b4ddc85ff6-logs\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.617717 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-sys\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.617745 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-sys\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.617766 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-lib-modules\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc 
kubenswrapper[4751]: I0131 15:04:12.617869 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-etc-nvme\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.617745 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-lib-modules\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.617910 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-dev\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.617926 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-etc-iscsi\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.617941 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-config-data\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.617962 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-scripts\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.617982 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-logs\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.618000 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-run\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.618016 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q42nd\" (UniqueName: \"kubernetes.io/projected/34a7875b-0d63-43d7-9833-07b4ddc85ff6-kube-api-access-q42nd\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.618033 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34a7875b-0d63-43d7-9833-07b4ddc85ff6-httpd-run\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.618200 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" 
(UniqueName: \"kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-dev\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.618233 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-etc-iscsi\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.618296 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34a7875b-0d63-43d7-9833-07b4ddc85ff6-httpd-run\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.625402 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34a7875b-0d63-43d7-9833-07b4ddc85ff6-config-data\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.626222 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34a7875b-0d63-43d7-9833-07b4ddc85ff6-scripts\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.634436 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q42nd\" (UniqueName: \"kubernetes.io/projected/34a7875b-0d63-43d7-9833-07b4ddc85ff6-kube-api-access-q42nd\") pod \"glance-default-single-2\" (UID: 
\"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.637852 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.648225 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod \"glance-default-single-2\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.719344 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.719388 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-etc-nvme\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.719418 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-dev\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.719465 4751 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-lib-modules\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.719489 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.719510 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pqz7z\" (UniqueName: \"kubernetes.io/projected/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-kube-api-access-pqz7z\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.719545 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-config-data\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.719572 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-scripts\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.719579 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: 
\"kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-lib-modules\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.719611 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-logs\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.719637 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-run\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.719665 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-dev\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.719675 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.719706 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-var-locks-brick\") pod \"glance-default-single-1\" (UID: 
\"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.719730 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") device mount path \"/mnt/openstack/pv20\"" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.719748 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-httpd-run\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.719770 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-sys\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.719855 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-sys\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.720195 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-etc-iscsi\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc 
kubenswrapper[4751]: I0131 15:04:12.720278 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-run\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.719641 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-etc-nvme\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.719542 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") device mount path \"/mnt/openstack/pv17\"" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.720305 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-var-locks-brick\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.720648 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-logs\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.720837 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-httpd-run\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.723246 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-scripts\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.724316 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-config-data\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.735798 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pqz7z\" (UniqueName: \"kubernetes.io/projected/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-kube-api-access-pqz7z\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.739418 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-single-1\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.740057 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"glance-default-single-1\" (UID: 
\"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.780651 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:12 crc kubenswrapper[4751]: I0131 15:04:12.804027 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:13 crc kubenswrapper[4751]: I0131 15:04:13.204927 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-2"] Jan 31 15:04:13 crc kubenswrapper[4751]: I0131 15:04:13.257827 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 15:04:13 crc kubenswrapper[4751]: W0131 15:04:13.330807 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2dfb5a9a_fa92_40c6_84ab_7b6b081cc688.slice/crio-402147d966fca3f55a794a137f4a6b76809d5828d0e869e767465f78800c790f WatchSource:0}: Error finding container 402147d966fca3f55a794a137f4a6b76809d5828d0e869e767465f78800c790f: Status 404 returned error can't find the container with id 402147d966fca3f55a794a137f4a6b76809d5828d0e869e767465f78800c790f Jan 31 15:04:13 crc kubenswrapper[4751]: I0131 15:04:13.692711 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688","Type":"ContainerStarted","Data":"4fbc4ee6ad6c761a5b5a51163a560ee167d3f98243bc794a7addb24ce85b940b"} Jan 31 15:04:13 crc kubenswrapper[4751]: I0131 15:04:13.693230 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688","Type":"ContainerStarted","Data":"645226e20597fef9cf7c812cae862c4ac3ab03116b7fb8b29ff6d081dd574d5f"} Jan 31 15:04:13 
crc kubenswrapper[4751]: I0131 15:04:13.693249 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688","Type":"ContainerStarted","Data":"402147d966fca3f55a794a137f4a6b76809d5828d0e869e767465f78800c790f"} Jan 31 15:04:13 crc kubenswrapper[4751]: I0131 15:04:13.694935 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-2" event={"ID":"34a7875b-0d63-43d7-9833-07b4ddc85ff6","Type":"ContainerStarted","Data":"b5eadfa9c57b278838c0ae7db714f9a98b67d467f6d66b571b2645118f81fc0e"} Jan 31 15:04:13 crc kubenswrapper[4751]: I0131 15:04:13.694966 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-2" event={"ID":"34a7875b-0d63-43d7-9833-07b4ddc85ff6","Type":"ContainerStarted","Data":"619f96ea7220c280d8a22355d64a0c2812492ed032b5d0869b1e6b943637206b"} Jan 31 15:04:13 crc kubenswrapper[4751]: I0131 15:04:13.694980 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-2" event={"ID":"34a7875b-0d63-43d7-9833-07b4ddc85ff6","Type":"ContainerStarted","Data":"072c566469ce5399f0b0c6f71e56fe648451b73be39d67d13b4a1e8f7f64cd97"} Jan 31 15:04:13 crc kubenswrapper[4751]: I0131 15:04:13.719609 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-single-1" podStartSLOduration=2.719584883 podStartE2EDuration="2.719584883s" podCreationTimestamp="2026-01-31 15:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:04:13.712833705 +0000 UTC m=+1358.087546580" watchObservedRunningTime="2026-01-31 15:04:13.719584883 +0000 UTC m=+1358.094297778" Jan 31 15:04:13 crc kubenswrapper[4751]: I0131 15:04:13.739600 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="glance-kuttl-tests/glance-default-single-2" podStartSLOduration=2.739577731 podStartE2EDuration="2.739577731s" podCreationTimestamp="2026-01-31 15:04:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:04:13.733042728 +0000 UTC m=+1358.107755633" watchObservedRunningTime="2026-01-31 15:04:13.739577731 +0000 UTC m=+1358.114290656" Jan 31 15:04:22 crc kubenswrapper[4751]: I0131 15:04:22.781603 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:22 crc kubenswrapper[4751]: I0131 15:04:22.784319 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:22 crc kubenswrapper[4751]: I0131 15:04:22.804912 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:22 crc kubenswrapper[4751]: I0131 15:04:22.805117 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:22 crc kubenswrapper[4751]: I0131 15:04:22.810129 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:22 crc kubenswrapper[4751]: I0131 15:04:22.830727 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:22 crc kubenswrapper[4751]: I0131 15:04:22.841934 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:22 crc kubenswrapper[4751]: I0131 15:04:22.842257 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:23 crc kubenswrapper[4751]: I0131 15:04:23.793260 
4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:23 crc kubenswrapper[4751]: I0131 15:04:23.793331 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:23 crc kubenswrapper[4751]: I0131 15:04:23.793354 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:23 crc kubenswrapper[4751]: I0131 15:04:23.793374 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:25 crc kubenswrapper[4751]: I0131 15:04:25.737765 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:25 crc kubenswrapper[4751]: I0131 15:04:25.812040 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:25 crc kubenswrapper[4751]: I0131 15:04:25.881242 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:25 crc kubenswrapper[4751]: I0131 15:04:25.881390 4751 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 31 15:04:25 crc kubenswrapper[4751]: I0131 15:04:25.895848 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:26 crc kubenswrapper[4751]: I0131 15:04:26.892737 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-2"] Jan 31 15:04:26 crc kubenswrapper[4751]: I0131 15:04:26.908040 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 15:04:27 crc kubenswrapper[4751]: I0131 15:04:27.828745 4751 kuberuntime_container.go:808] "Killing container 
with a grace period" pod="glance-kuttl-tests/glance-default-single-1" podUID="2dfb5a9a-fa92-40c6-84ab-7b6b081cc688" containerName="glance-log" containerID="cri-o://645226e20597fef9cf7c812cae862c4ac3ab03116b7fb8b29ff6d081dd574d5f" gracePeriod=30 Jan 31 15:04:27 crc kubenswrapper[4751]: I0131 15:04:27.828996 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-2" podUID="34a7875b-0d63-43d7-9833-07b4ddc85ff6" containerName="glance-httpd" containerID="cri-o://b5eadfa9c57b278838c0ae7db714f9a98b67d467f6d66b571b2645118f81fc0e" gracePeriod=30 Jan 31 15:04:27 crc kubenswrapper[4751]: I0131 15:04:27.828900 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-1" podUID="2dfb5a9a-fa92-40c6-84ab-7b6b081cc688" containerName="glance-httpd" containerID="cri-o://4fbc4ee6ad6c761a5b5a51163a560ee167d3f98243bc794a7addb24ce85b940b" gracePeriod=30 Jan 31 15:04:27 crc kubenswrapper[4751]: I0131 15:04:27.828962 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-2" podUID="34a7875b-0d63-43d7-9833-07b4ddc85ff6" containerName="glance-log" containerID="cri-o://619f96ea7220c280d8a22355d64a0c2812492ed032b5d0869b1e6b943637206b" gracePeriod=30 Jan 31 15:04:27 crc kubenswrapper[4751]: I0131 15:04:27.835932 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/glance-default-single-1" podUID="2dfb5a9a-fa92-40c6-84ab-7b6b081cc688" containerName="glance-httpd" probeResult="failure" output="Get \"http://10.217.0.140:9292/healthcheck\": EOF" Jan 31 15:04:28 crc kubenswrapper[4751]: I0131 15:04:28.839839 4751 generic.go:334] "Generic (PLEG): container finished" podID="2dfb5a9a-fa92-40c6-84ab-7b6b081cc688" containerID="645226e20597fef9cf7c812cae862c4ac3ab03116b7fb8b29ff6d081dd574d5f" exitCode=143 Jan 31 15:04:28 crc kubenswrapper[4751]: I0131 15:04:28.839939 4751 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688","Type":"ContainerDied","Data":"645226e20597fef9cf7c812cae862c4ac3ab03116b7fb8b29ff6d081dd574d5f"} Jan 31 15:04:28 crc kubenswrapper[4751]: I0131 15:04:28.843897 4751 generic.go:334] "Generic (PLEG): container finished" podID="34a7875b-0d63-43d7-9833-07b4ddc85ff6" containerID="619f96ea7220c280d8a22355d64a0c2812492ed032b5d0869b1e6b943637206b" exitCode=143 Jan 31 15:04:28 crc kubenswrapper[4751]: I0131 15:04:28.843946 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-2" event={"ID":"34a7875b-0d63-43d7-9833-07b4ddc85ff6","Type":"ContainerDied","Data":"619f96ea7220c280d8a22355d64a0c2812492ed032b5d0869b1e6b943637206b"} Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.389325 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.462338 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-lib-modules\") pod \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.462403 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34a7875b-0d63-43d7-9833-07b4ddc85ff6-config-data\") pod \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.462434 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") pod \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\" (UID: 
\"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.462454 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34a7875b-0d63-43d7-9833-07b4ddc85ff6-logs\") pod \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.462500 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-run\") pod \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.462519 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34a7875b-0d63-43d7-9833-07b4ddc85ff6-scripts\") pod \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.462565 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q42nd\" (UniqueName: \"kubernetes.io/projected/34a7875b-0d63-43d7-9833-07b4ddc85ff6-kube-api-access-q42nd\") pod \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.462597 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-etc-nvme\") pod \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.462615 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") pod 
\"34a7875b-0d63-43d7-9833-07b4ddc85ff6\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.462658 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-dev\") pod \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.462890 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34a7875b-0d63-43d7-9833-07b4ddc85ff6-httpd-run\") pod \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.462932 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-var-locks-brick\") pod \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.462955 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-sys\") pod \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.462977 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-etc-iscsi\") pod \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\" (UID: \"34a7875b-0d63-43d7-9833-07b4ddc85ff6\") " Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.462982 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "34a7875b-0d63-43d7-9833-07b4ddc85ff6" (UID: "34a7875b-0d63-43d7-9833-07b4ddc85ff6"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.463122 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "34a7875b-0d63-43d7-9833-07b4ddc85ff6" (UID: "34a7875b-0d63-43d7-9833-07b4ddc85ff6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.463217 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "34a7875b-0d63-43d7-9833-07b4ddc85ff6" (UID: "34a7875b-0d63-43d7-9833-07b4ddc85ff6"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.463285 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-run" (OuterVolumeSpecName: "run") pod "34a7875b-0d63-43d7-9833-07b4ddc85ff6" (UID: "34a7875b-0d63-43d7-9833-07b4ddc85ff6"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.463965 4751 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.467270 4751 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.467284 4751 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.463399 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34a7875b-0d63-43d7-9833-07b4ddc85ff6-logs" (OuterVolumeSpecName: "logs") pod "34a7875b-0d63-43d7-9833-07b4ddc85ff6" (UID: "34a7875b-0d63-43d7-9833-07b4ddc85ff6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.463635 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34a7875b-0d63-43d7-9833-07b4ddc85ff6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "34a7875b-0d63-43d7-9833-07b4ddc85ff6" (UID: "34a7875b-0d63-43d7-9833-07b4ddc85ff6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.463654 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-dev" (OuterVolumeSpecName: "dev") pod "34a7875b-0d63-43d7-9833-07b4ddc85ff6" (UID: "34a7875b-0d63-43d7-9833-07b4ddc85ff6"). 
InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.463675 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "34a7875b-0d63-43d7-9833-07b4ddc85ff6" (UID: "34a7875b-0d63-43d7-9833-07b4ddc85ff6"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.463693 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-sys" (OuterVolumeSpecName: "sys") pod "34a7875b-0d63-43d7-9833-07b4ddc85ff6" (UID: "34a7875b-0d63-43d7-9833-07b4ddc85ff6"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.468464 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage01-crc" (OuterVolumeSpecName: "glance") pod "34a7875b-0d63-43d7-9833-07b4ddc85ff6" (UID: "34a7875b-0d63-43d7-9833-07b4ddc85ff6"). InnerVolumeSpecName "local-storage01-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.470694 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34a7875b-0d63-43d7-9833-07b4ddc85ff6-kube-api-access-q42nd" (OuterVolumeSpecName: "kube-api-access-q42nd") pod "34a7875b-0d63-43d7-9833-07b4ddc85ff6" (UID: "34a7875b-0d63-43d7-9833-07b4ddc85ff6"). InnerVolumeSpecName "kube-api-access-q42nd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.471398 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage11-crc" (OuterVolumeSpecName: "glance-cache") pod "34a7875b-0d63-43d7-9833-07b4ddc85ff6" (UID: "34a7875b-0d63-43d7-9833-07b4ddc85ff6"). InnerVolumeSpecName "local-storage11-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.474836 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34a7875b-0d63-43d7-9833-07b4ddc85ff6-scripts" (OuterVolumeSpecName: "scripts") pod "34a7875b-0d63-43d7-9833-07b4ddc85ff6" (UID: "34a7875b-0d63-43d7-9833-07b4ddc85ff6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.508452 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.511646 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34a7875b-0d63-43d7-9833-07b4ddc85ff6-config-data" (OuterVolumeSpecName: "config-data") pod "34a7875b-0d63-43d7-9833-07b4ddc85ff6" (UID: "34a7875b-0d63-43d7-9833-07b4ddc85ff6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.568085 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") pod \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.568140 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-run\") pod \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.568181 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-var-locks-brick\") pod \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.568214 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.568246 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-config-data\") pod \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.568263 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-etc-iscsi\") pod 
\"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.568315 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-dev\") pod \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.568352 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-httpd-run\") pod \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.568375 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-logs\") pod \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.568423 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pqz7z\" (UniqueName: \"kubernetes.io/projected/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-kube-api-access-pqz7z\") pod \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.568451 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-etc-nvme\") pod \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.568469 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: 
\"kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-sys\") pod \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.568499 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-lib-modules\") pod \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.568526 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-scripts\") pod \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\" (UID: \"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688\") " Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.568825 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node \"crc\" " Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.568847 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/34a7875b-0d63-43d7-9833-07b4ddc85ff6-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.568856 4751 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.568865 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/34a7875b-0d63-43d7-9833-07b4ddc85ff6-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.568873 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-q42nd\" 
(UniqueName: \"kubernetes.io/projected/34a7875b-0d63-43d7-9833-07b4ddc85ff6-kube-api-access-q42nd\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.568889 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" " Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.568899 4751 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-dev\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.568906 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/34a7875b-0d63-43d7-9833-07b4ddc85ff6-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.568914 4751 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.568923 4751 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/34a7875b-0d63-43d7-9833-07b4ddc85ff6-sys\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.568931 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/34a7875b-0d63-43d7-9833-07b4ddc85ff6-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.571493 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage17-crc" (OuterVolumeSpecName: "glance-cache") pod "2dfb5a9a-fa92-40c6-84ab-7b6b081cc688" (UID: "2dfb5a9a-fa92-40c6-84ab-7b6b081cc688"). 
InnerVolumeSpecName "local-storage17-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.571561 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-run" (OuterVolumeSpecName: "run") pod "2dfb5a9a-fa92-40c6-84ab-7b6b081cc688" (UID: "2dfb5a9a-fa92-40c6-84ab-7b6b081cc688"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.571590 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "2dfb5a9a-fa92-40c6-84ab-7b6b081cc688" (UID: "2dfb5a9a-fa92-40c6-84ab-7b6b081cc688"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.571618 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2dfb5a9a-fa92-40c6-84ab-7b6b081cc688" (UID: "2dfb5a9a-fa92-40c6-84ab-7b6b081cc688"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.571643 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "2dfb5a9a-fa92-40c6-84ab-7b6b081cc688" (UID: "2dfb5a9a-fa92-40c6-84ab-7b6b081cc688"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.571667 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-sys" (OuterVolumeSpecName: "sys") pod "2dfb5a9a-fa92-40c6-84ab-7b6b081cc688" (UID: "2dfb5a9a-fa92-40c6-84ab-7b6b081cc688"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.572042 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage20-crc" (OuterVolumeSpecName: "glance") pod "2dfb5a9a-fa92-40c6-84ab-7b6b081cc688" (UID: "2dfb5a9a-fa92-40c6-84ab-7b6b081cc688"). InnerVolumeSpecName "local-storage20-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.572907 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-dev" (OuterVolumeSpecName: "dev") pod "2dfb5a9a-fa92-40c6-84ab-7b6b081cc688" (UID: "2dfb5a9a-fa92-40c6-84ab-7b6b081cc688"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.573236 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "2dfb5a9a-fa92-40c6-84ab-7b6b081cc688" (UID: "2dfb5a9a-fa92-40c6-84ab-7b6b081cc688"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.573529 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2dfb5a9a-fa92-40c6-84ab-7b6b081cc688" (UID: "2dfb5a9a-fa92-40c6-84ab-7b6b081cc688"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.573583 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-logs" (OuterVolumeSpecName: "logs") pod "2dfb5a9a-fa92-40c6-84ab-7b6b081cc688" (UID: "2dfb5a9a-fa92-40c6-84ab-7b6b081cc688"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.574015 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-kube-api-access-pqz7z" (OuterVolumeSpecName: "kube-api-access-pqz7z") pod "2dfb5a9a-fa92-40c6-84ab-7b6b081cc688" (UID: "2dfb5a9a-fa92-40c6-84ab-7b6b081cc688"). InnerVolumeSpecName "kube-api-access-pqz7z". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.575038 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-scripts" (OuterVolumeSpecName: "scripts") pod "2dfb5a9a-fa92-40c6-84ab-7b6b081cc688" (UID: "2dfb5a9a-fa92-40c6-84ab-7b6b081cc688"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.583943 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage01-crc" (UniqueName: "kubernetes.io/local-volume/local-storage01-crc") on node "crc" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.587186 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage11-crc" (UniqueName: "kubernetes.io/local-volume/local-storage11-crc") on node "crc" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.603739 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-config-data" (OuterVolumeSpecName: "config-data") pod "2dfb5a9a-fa92-40c6-84ab-7b6b081cc688" (UID: "2dfb5a9a-fa92-40c6-84ab-7b6b081cc688"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.670118 4751 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.670150 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage01-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage01-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.670184 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") on node \"crc\" " Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.670192 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:31 crc 
kubenswrapper[4751]: I0131 15:04:31.670201 4751 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.670213 4751 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-dev\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.670221 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.670231 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.670239 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pqz7z\" (UniqueName: \"kubernetes.io/projected/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-kube-api-access-pqz7z\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.670248 4751 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.670256 4751 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-sys\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.670264 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage11-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage11-crc\") on node 
\"crc\" DevicePath \"\"" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.670271 4751 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.670279 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.670300 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") on node \"crc\" " Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.670308 4751 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.682776 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage20-crc" (UniqueName: "kubernetes.io/local-volume/local-storage20-crc") on node "crc" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.683329 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage17-crc" (UniqueName: "kubernetes.io/local-volume/local-storage17-crc") on node "crc" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.772391 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage20-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage20-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.772754 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") on node \"crc\" DevicePath \"\"" Jan 31 
15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.887861 4751 generic.go:334] "Generic (PLEG): container finished" podID="2dfb5a9a-fa92-40c6-84ab-7b6b081cc688" containerID="4fbc4ee6ad6c761a5b5a51163a560ee167d3f98243bc794a7addb24ce85b940b" exitCode=0 Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.887920 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688","Type":"ContainerDied","Data":"4fbc4ee6ad6c761a5b5a51163a560ee167d3f98243bc794a7addb24ce85b940b"} Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.887944 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-1" event={"ID":"2dfb5a9a-fa92-40c6-84ab-7b6b081cc688","Type":"ContainerDied","Data":"402147d966fca3f55a794a137f4a6b76809d5828d0e869e767465f78800c790f"} Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.887960 4751 scope.go:117] "RemoveContainer" containerID="4fbc4ee6ad6c761a5b5a51163a560ee167d3f98243bc794a7addb24ce85b940b" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.888050 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-1" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.891838 4751 generic.go:334] "Generic (PLEG): container finished" podID="34a7875b-0d63-43d7-9833-07b4ddc85ff6" containerID="b5eadfa9c57b278838c0ae7db714f9a98b67d467f6d66b571b2645118f81fc0e" exitCode=0 Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.891896 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-2" event={"ID":"34a7875b-0d63-43d7-9833-07b4ddc85ff6","Type":"ContainerDied","Data":"b5eadfa9c57b278838c0ae7db714f9a98b67d467f6d66b571b2645118f81fc0e"} Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.891947 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-2" event={"ID":"34a7875b-0d63-43d7-9833-07b4ddc85ff6","Type":"ContainerDied","Data":"072c566469ce5399f0b0c6f71e56fe648451b73be39d67d13b4a1e8f7f64cd97"} Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.891879 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-2" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.912241 4751 scope.go:117] "RemoveContainer" containerID="645226e20597fef9cf7c812cae862c4ac3ab03116b7fb8b29ff6d081dd574d5f" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.929704 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.939881 4751 scope.go:117] "RemoveContainer" containerID="4fbc4ee6ad6c761a5b5a51163a560ee167d3f98243bc794a7addb24ce85b940b" Jan 31 15:04:31 crc kubenswrapper[4751]: E0131 15:04:31.940397 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4fbc4ee6ad6c761a5b5a51163a560ee167d3f98243bc794a7addb24ce85b940b\": container with ID starting with 4fbc4ee6ad6c761a5b5a51163a560ee167d3f98243bc794a7addb24ce85b940b not found: ID does not exist" containerID="4fbc4ee6ad6c761a5b5a51163a560ee167d3f98243bc794a7addb24ce85b940b" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.940435 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4fbc4ee6ad6c761a5b5a51163a560ee167d3f98243bc794a7addb24ce85b940b"} err="failed to get container status \"4fbc4ee6ad6c761a5b5a51163a560ee167d3f98243bc794a7addb24ce85b940b\": rpc error: code = NotFound desc = could not find container \"4fbc4ee6ad6c761a5b5a51163a560ee167d3f98243bc794a7addb24ce85b940b\": container with ID starting with 4fbc4ee6ad6c761a5b5a51163a560ee167d3f98243bc794a7addb24ce85b940b not found: ID does not exist" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.940462 4751 scope.go:117] "RemoveContainer" containerID="645226e20597fef9cf7c812cae862c4ac3ab03116b7fb8b29ff6d081dd574d5f" Jan 31 15:04:31 crc kubenswrapper[4751]: E0131 15:04:31.940899 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc 
= could not find container \"645226e20597fef9cf7c812cae862c4ac3ab03116b7fb8b29ff6d081dd574d5f\": container with ID starting with 645226e20597fef9cf7c812cae862c4ac3ab03116b7fb8b29ff6d081dd574d5f not found: ID does not exist" containerID="645226e20597fef9cf7c812cae862c4ac3ab03116b7fb8b29ff6d081dd574d5f" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.940943 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"645226e20597fef9cf7c812cae862c4ac3ab03116b7fb8b29ff6d081dd574d5f"} err="failed to get container status \"645226e20597fef9cf7c812cae862c4ac3ab03116b7fb8b29ff6d081dd574d5f\": rpc error: code = NotFound desc = could not find container \"645226e20597fef9cf7c812cae862c4ac3ab03116b7fb8b29ff6d081dd574d5f\": container with ID starting with 645226e20597fef9cf7c812cae862c4ac3ab03116b7fb8b29ff6d081dd574d5f not found: ID does not exist" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.940978 4751 scope.go:117] "RemoveContainer" containerID="b5eadfa9c57b278838c0ae7db714f9a98b67d467f6d66b571b2645118f81fc0e" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.946272 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-1"] Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.956525 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-2"] Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.962299 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-2"] Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.965109 4751 scope.go:117] "RemoveContainer" containerID="619f96ea7220c280d8a22355d64a0c2812492ed032b5d0869b1e6b943637206b" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.982878 4751 scope.go:117] "RemoveContainer" containerID="b5eadfa9c57b278838c0ae7db714f9a98b67d467f6d66b571b2645118f81fc0e" Jan 31 15:04:31 crc kubenswrapper[4751]: E0131 15:04:31.983400 
4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b5eadfa9c57b278838c0ae7db714f9a98b67d467f6d66b571b2645118f81fc0e\": container with ID starting with b5eadfa9c57b278838c0ae7db714f9a98b67d467f6d66b571b2645118f81fc0e not found: ID does not exist" containerID="b5eadfa9c57b278838c0ae7db714f9a98b67d467f6d66b571b2645118f81fc0e" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.983441 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b5eadfa9c57b278838c0ae7db714f9a98b67d467f6d66b571b2645118f81fc0e"} err="failed to get container status \"b5eadfa9c57b278838c0ae7db714f9a98b67d467f6d66b571b2645118f81fc0e\": rpc error: code = NotFound desc = could not find container \"b5eadfa9c57b278838c0ae7db714f9a98b67d467f6d66b571b2645118f81fc0e\": container with ID starting with b5eadfa9c57b278838c0ae7db714f9a98b67d467f6d66b571b2645118f81fc0e not found: ID does not exist" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.983468 4751 scope.go:117] "RemoveContainer" containerID="619f96ea7220c280d8a22355d64a0c2812492ed032b5d0869b1e6b943637206b" Jan 31 15:04:31 crc kubenswrapper[4751]: E0131 15:04:31.983927 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"619f96ea7220c280d8a22355d64a0c2812492ed032b5d0869b1e6b943637206b\": container with ID starting with 619f96ea7220c280d8a22355d64a0c2812492ed032b5d0869b1e6b943637206b not found: ID does not exist" containerID="619f96ea7220c280d8a22355d64a0c2812492ed032b5d0869b1e6b943637206b" Jan 31 15:04:31 crc kubenswrapper[4751]: I0131 15:04:31.983966 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"619f96ea7220c280d8a22355d64a0c2812492ed032b5d0869b1e6b943637206b"} err="failed to get container status \"619f96ea7220c280d8a22355d64a0c2812492ed032b5d0869b1e6b943637206b\": rpc error: code = 
NotFound desc = could not find container \"619f96ea7220c280d8a22355d64a0c2812492ed032b5d0869b1e6b943637206b\": container with ID starting with 619f96ea7220c280d8a22355d64a0c2812492ed032b5d0869b1e6b943637206b not found: ID does not exist" Jan 31 15:04:32 crc kubenswrapper[4751]: I0131 15:04:32.423385 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2dfb5a9a-fa92-40c6-84ab-7b6b081cc688" path="/var/lib/kubelet/pods/2dfb5a9a-fa92-40c6-84ab-7b6b081cc688/volumes" Jan 31 15:04:32 crc kubenswrapper[4751]: I0131 15:04:32.424758 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34a7875b-0d63-43d7-9833-07b4ddc85ff6" path="/var/lib/kubelet/pods/34a7875b-0d63-43d7-9833-07b4ddc85ff6/volumes" Jan 31 15:04:33 crc kubenswrapper[4751]: I0131 15:04:33.204827 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 15:04:33 crc kubenswrapper[4751]: I0131 15:04:33.205364 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d" containerName="glance-log" containerID="cri-o://9f3ae92fe0ff829368293b9ad1323f714b2c1d0600b09baf58692dca7948e19c" gracePeriod=30 Jan 31 15:04:33 crc kubenswrapper[4751]: I0131 15:04:33.205434 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-single-0" podUID="46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d" containerName="glance-httpd" containerID="cri-o://028ae3739718b4f9fa1406588bccd6e53a5c4051e633a6571cc7446d59642b17" gracePeriod=30 Jan 31 15:04:33 crc kubenswrapper[4751]: I0131 15:04:33.924150 4751 generic.go:334] "Generic (PLEG): container finished" podID="46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d" containerID="9f3ae92fe0ff829368293b9ad1323f714b2c1d0600b09baf58692dca7948e19c" exitCode=143 Jan 31 15:04:33 crc kubenswrapper[4751]: I0131 15:04:33.924228 4751 kubelet.go:2453] "SyncLoop (PLEG): event 
for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d","Type":"ContainerDied","Data":"9f3ae92fe0ff829368293b9ad1323f714b2c1d0600b09baf58692dca7948e19c"} Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.741541 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.852528 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.853020 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-logs\") pod \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.853160 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-sys\") pod \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.853280 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-run\") pod \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.853415 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-httpd-run\") pod 
\"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.853746 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.853858 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-config-data\") pod \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.853976 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-var-locks-brick\") pod \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.854196 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-dev\") pod \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.854300 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-etc-nvme\") pod \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.854408 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-etc-iscsi\") pod \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.854535 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-scripts\") pod \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.854625 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hwqz\" (UniqueName: \"kubernetes.io/projected/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-kube-api-access-7hwqz\") pod \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.854719 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-lib-modules\") pod \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\" (UID: \"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d\") " Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.855282 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d" (UID: "46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.855398 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d" (UID: "46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d"). 
InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.857382 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d" (UID: "46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.858490 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-sys" (OuterVolumeSpecName: "sys") pod "46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d" (UID: "46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.858534 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-run" (OuterVolumeSpecName: "run") pod "46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d" (UID: "46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.858821 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-logs" (OuterVolumeSpecName: "logs") pod "46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d" (UID: "46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d"). InnerVolumeSpecName "logs". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.858858 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d" (UID: "46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.858859 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d" (UID: "46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.858881 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-dev" (OuterVolumeSpecName: "dev") pod "46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d" (UID: "46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.868770 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-scripts" (OuterVolumeSpecName: "scripts") pod "46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d" (UID: "46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.871242 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage16-crc" (OuterVolumeSpecName: "glance") pod "46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d" (UID: "46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d"). InnerVolumeSpecName "local-storage16-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.882984 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage15-crc" (OuterVolumeSpecName: "glance-cache") pod "46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d" (UID: "46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d"). InnerVolumeSpecName "local-storage15-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.885263 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-kube-api-access-7hwqz" (OuterVolumeSpecName: "kube-api-access-7hwqz") pod "46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d" (UID: "46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d"). InnerVolumeSpecName "kube-api-access-7hwqz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.963881 4751 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-sys\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.964188 4751 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.964198 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.964219 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") on node \"crc\" " Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.964230 4751 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.964238 4751 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-dev\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.964246 4751 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.964254 4751 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.964262 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.964270 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hwqz\" (UniqueName: \"kubernetes.io/projected/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-kube-api-access-7hwqz\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.964279 4751 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.964293 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") on node \"crc\" " Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.964301 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.977494 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-config-data" (OuterVolumeSpecName: "config-data") pod "46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d" (UID: "46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.980239 4751 generic.go:334] "Generic (PLEG): container finished" podID="46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d" containerID="028ae3739718b4f9fa1406588bccd6e53a5c4051e633a6571cc7446d59642b17" exitCode=0 Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.980277 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d","Type":"ContainerDied","Data":"028ae3739718b4f9fa1406588bccd6e53a5c4051e633a6571cc7446d59642b17"} Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.980302 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-single-0" event={"ID":"46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d","Type":"ContainerDied","Data":"3752a1af33745a097b553c49a65db992ed225f5133c4335738998e6644147a8d"} Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.980317 4751 scope.go:117] "RemoveContainer" containerID="028ae3739718b4f9fa1406588bccd6e53a5c4051e633a6571cc7446d59642b17" Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.980447 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-single-0" Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.989801 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage15-crc" (UniqueName: "kubernetes.io/local-volume/local-storage15-crc") on node "crc" Jan 31 15:04:36 crc kubenswrapper[4751]: I0131 15:04:36.996617 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage16-crc" (UniqueName: "kubernetes.io/local-volume/local-storage16-crc") on node "crc" Jan 31 15:04:37 crc kubenswrapper[4751]: I0131 15:04:37.010543 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 15:04:37 crc kubenswrapper[4751]: I0131 15:04:37.016940 4751 scope.go:117] "RemoveContainer" containerID="9f3ae92fe0ff829368293b9ad1323f714b2c1d0600b09baf58692dca7948e19c" Jan 31 15:04:37 crc kubenswrapper[4751]: I0131 15:04:37.018956 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-single-0"] Jan 31 15:04:37 crc kubenswrapper[4751]: I0131 15:04:37.046118 4751 scope.go:117] "RemoveContainer" containerID="028ae3739718b4f9fa1406588bccd6e53a5c4051e633a6571cc7446d59642b17" Jan 31 15:04:37 crc kubenswrapper[4751]: E0131 15:04:37.046679 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"028ae3739718b4f9fa1406588bccd6e53a5c4051e633a6571cc7446d59642b17\": container with ID starting with 028ae3739718b4f9fa1406588bccd6e53a5c4051e633a6571cc7446d59642b17 not found: ID does not exist" containerID="028ae3739718b4f9fa1406588bccd6e53a5c4051e633a6571cc7446d59642b17" Jan 31 15:04:37 crc kubenswrapper[4751]: I0131 15:04:37.047128 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"028ae3739718b4f9fa1406588bccd6e53a5c4051e633a6571cc7446d59642b17"} err="failed to get container status 
\"028ae3739718b4f9fa1406588bccd6e53a5c4051e633a6571cc7446d59642b17\": rpc error: code = NotFound desc = could not find container \"028ae3739718b4f9fa1406588bccd6e53a5c4051e633a6571cc7446d59642b17\": container with ID starting with 028ae3739718b4f9fa1406588bccd6e53a5c4051e633a6571cc7446d59642b17 not found: ID does not exist" Jan 31 15:04:37 crc kubenswrapper[4751]: I0131 15:04:37.047167 4751 scope.go:117] "RemoveContainer" containerID="9f3ae92fe0ff829368293b9ad1323f714b2c1d0600b09baf58692dca7948e19c" Jan 31 15:04:37 crc kubenswrapper[4751]: E0131 15:04:37.047405 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9f3ae92fe0ff829368293b9ad1323f714b2c1d0600b09baf58692dca7948e19c\": container with ID starting with 9f3ae92fe0ff829368293b9ad1323f714b2c1d0600b09baf58692dca7948e19c not found: ID does not exist" containerID="9f3ae92fe0ff829368293b9ad1323f714b2c1d0600b09baf58692dca7948e19c" Jan 31 15:04:37 crc kubenswrapper[4751]: I0131 15:04:37.047446 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9f3ae92fe0ff829368293b9ad1323f714b2c1d0600b09baf58692dca7948e19c"} err="failed to get container status \"9f3ae92fe0ff829368293b9ad1323f714b2c1d0600b09baf58692dca7948e19c\": rpc error: code = NotFound desc = could not find container \"9f3ae92fe0ff829368293b9ad1323f714b2c1d0600b09baf58692dca7948e19c\": container with ID starting with 9f3ae92fe0ff829368293b9ad1323f714b2c1d0600b09baf58692dca7948e19c not found: ID does not exist" Jan 31 15:04:37 crc kubenswrapper[4751]: I0131 15:04:37.066285 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:37 crc kubenswrapper[4751]: I0131 15:04:37.066313 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:37 crc kubenswrapper[4751]: I0131 15:04:37.066324 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:37 crc kubenswrapper[4751]: E0131 15:04:37.075795 4751 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod46fa6f5a_9f30_4e19_8b53_b8aa7c3a533d.slice\": RecentStats: unable to find data in memory cache]" Jan 31 15:04:37 crc kubenswrapper[4751]: I0131 15:04:37.569381 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-sync-zzxfv"] Jan 31 15:04:37 crc kubenswrapper[4751]: I0131 15:04:37.576134 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-sync-zzxfv"] Jan 31 15:04:37 crc kubenswrapper[4751]: I0131 15:04:37.592997 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance633a-account-delete-4gbtt"] Jan 31 15:04:37 crc kubenswrapper[4751]: E0131 15:04:37.593268 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34a7875b-0d63-43d7-9833-07b4ddc85ff6" containerName="glance-httpd" Jan 31 15:04:37 crc kubenswrapper[4751]: I0131 15:04:37.593283 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="34a7875b-0d63-43d7-9833-07b4ddc85ff6" containerName="glance-httpd" Jan 31 15:04:37 crc kubenswrapper[4751]: E0131 15:04:37.593297 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dfb5a9a-fa92-40c6-84ab-7b6b081cc688" containerName="glance-log" Jan 31 15:04:37 crc kubenswrapper[4751]: I0131 15:04:37.593303 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dfb5a9a-fa92-40c6-84ab-7b6b081cc688" containerName="glance-log" Jan 31 15:04:37 crc 
kubenswrapper[4751]: E0131 15:04:37.593319 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d" containerName="glance-log" Jan 31 15:04:37 crc kubenswrapper[4751]: I0131 15:04:37.593325 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d" containerName="glance-log" Jan 31 15:04:37 crc kubenswrapper[4751]: E0131 15:04:37.593336 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2dfb5a9a-fa92-40c6-84ab-7b6b081cc688" containerName="glance-httpd" Jan 31 15:04:37 crc kubenswrapper[4751]: I0131 15:04:37.593343 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="2dfb5a9a-fa92-40c6-84ab-7b6b081cc688" containerName="glance-httpd" Jan 31 15:04:37 crc kubenswrapper[4751]: E0131 15:04:37.593357 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34a7875b-0d63-43d7-9833-07b4ddc85ff6" containerName="glance-log" Jan 31 15:04:37 crc kubenswrapper[4751]: I0131 15:04:37.593362 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="34a7875b-0d63-43d7-9833-07b4ddc85ff6" containerName="glance-log" Jan 31 15:04:37 crc kubenswrapper[4751]: E0131 15:04:37.593374 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d" containerName="glance-httpd" Jan 31 15:04:37 crc kubenswrapper[4751]: I0131 15:04:37.593379 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d" containerName="glance-httpd" Jan 31 15:04:37 crc kubenswrapper[4751]: I0131 15:04:37.593500 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="34a7875b-0d63-43d7-9833-07b4ddc85ff6" containerName="glance-log" Jan 31 15:04:37 crc kubenswrapper[4751]: I0131 15:04:37.593514 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="34a7875b-0d63-43d7-9833-07b4ddc85ff6" containerName="glance-httpd" Jan 31 15:04:37 crc kubenswrapper[4751]: I0131 15:04:37.593522 
4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d" containerName="glance-log" Jan 31 15:04:37 crc kubenswrapper[4751]: I0131 15:04:37.593530 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d" containerName="glance-httpd" Jan 31 15:04:37 crc kubenswrapper[4751]: I0131 15:04:37.593537 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dfb5a9a-fa92-40c6-84ab-7b6b081cc688" containerName="glance-log" Jan 31 15:04:37 crc kubenswrapper[4751]: I0131 15:04:37.593548 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="2dfb5a9a-fa92-40c6-84ab-7b6b081cc688" containerName="glance-httpd" Jan 31 15:04:37 crc kubenswrapper[4751]: I0131 15:04:37.593962 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance633a-account-delete-4gbtt" Jan 31 15:04:37 crc kubenswrapper[4751]: I0131 15:04:37.601744 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance633a-account-delete-4gbtt"] Jan 31 15:04:37 crc kubenswrapper[4751]: I0131 15:04:37.676206 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e25c4e6-94f1-4729-bcb2-ddb3200ccf3a-operator-scripts\") pod \"glance633a-account-delete-4gbtt\" (UID: \"1e25c4e6-94f1-4729-bcb2-ddb3200ccf3a\") " pod="glance-kuttl-tests/glance633a-account-delete-4gbtt" Jan 31 15:04:37 crc kubenswrapper[4751]: I0131 15:04:37.676546 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7m2ns\" (UniqueName: \"kubernetes.io/projected/1e25c4e6-94f1-4729-bcb2-ddb3200ccf3a-kube-api-access-7m2ns\") pod \"glance633a-account-delete-4gbtt\" (UID: \"1e25c4e6-94f1-4729-bcb2-ddb3200ccf3a\") " pod="glance-kuttl-tests/glance633a-account-delete-4gbtt" Jan 31 15:04:37 crc 
kubenswrapper[4751]: I0131 15:04:37.777784 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e25c4e6-94f1-4729-bcb2-ddb3200ccf3a-operator-scripts\") pod \"glance633a-account-delete-4gbtt\" (UID: \"1e25c4e6-94f1-4729-bcb2-ddb3200ccf3a\") " pod="glance-kuttl-tests/glance633a-account-delete-4gbtt" Jan 31 15:04:37 crc kubenswrapper[4751]: I0131 15:04:37.777864 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7m2ns\" (UniqueName: \"kubernetes.io/projected/1e25c4e6-94f1-4729-bcb2-ddb3200ccf3a-kube-api-access-7m2ns\") pod \"glance633a-account-delete-4gbtt\" (UID: \"1e25c4e6-94f1-4729-bcb2-ddb3200ccf3a\") " pod="glance-kuttl-tests/glance633a-account-delete-4gbtt" Jan 31 15:04:37 crc kubenswrapper[4751]: I0131 15:04:37.778577 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e25c4e6-94f1-4729-bcb2-ddb3200ccf3a-operator-scripts\") pod \"glance633a-account-delete-4gbtt\" (UID: \"1e25c4e6-94f1-4729-bcb2-ddb3200ccf3a\") " pod="glance-kuttl-tests/glance633a-account-delete-4gbtt" Jan 31 15:04:37 crc kubenswrapper[4751]: I0131 15:04:37.798833 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7m2ns\" (UniqueName: \"kubernetes.io/projected/1e25c4e6-94f1-4729-bcb2-ddb3200ccf3a-kube-api-access-7m2ns\") pod \"glance633a-account-delete-4gbtt\" (UID: \"1e25c4e6-94f1-4729-bcb2-ddb3200ccf3a\") " pod="glance-kuttl-tests/glance633a-account-delete-4gbtt" Jan 31 15:04:37 crc kubenswrapper[4751]: I0131 15:04:37.910372 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance633a-account-delete-4gbtt" Jan 31 15:04:38 crc kubenswrapper[4751]: I0131 15:04:38.342222 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance633a-account-delete-4gbtt"] Jan 31 15:04:38 crc kubenswrapper[4751]: I0131 15:04:38.414813 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d" path="/var/lib/kubelet/pods/46fa6f5a-9f30-4e19-8b53-b8aa7c3a533d/volumes" Jan 31 15:04:38 crc kubenswrapper[4751]: I0131 15:04:38.415551 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e91c4dc3-9319-4e4b-951a-4e1f117c3215" path="/var/lib/kubelet/pods/e91c4dc3-9319-4e4b-951a-4e1f117c3215/volumes" Jan 31 15:04:38 crc kubenswrapper[4751]: I0131 15:04:38.997902 4751 generic.go:334] "Generic (PLEG): container finished" podID="1e25c4e6-94f1-4729-bcb2-ddb3200ccf3a" containerID="c486e82ff06dabf3bbaf584cc05f4bf167ea45034bb1b4f577adb93e884d0e62" exitCode=0 Jan 31 15:04:38 crc kubenswrapper[4751]: I0131 15:04:38.998009 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance633a-account-delete-4gbtt" event={"ID":"1e25c4e6-94f1-4729-bcb2-ddb3200ccf3a","Type":"ContainerDied","Data":"c486e82ff06dabf3bbaf584cc05f4bf167ea45034bb1b4f577adb93e884d0e62"} Jan 31 15:04:38 crc kubenswrapper[4751]: I0131 15:04:38.998223 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance633a-account-delete-4gbtt" event={"ID":"1e25c4e6-94f1-4729-bcb2-ddb3200ccf3a","Type":"ContainerStarted","Data":"1a1bc68638ca8cbc0b488a80037c2b24dabf4fc76297519708441cd3c84fc4ca"} Jan 31 15:04:40 crc kubenswrapper[4751]: I0131 15:04:40.310475 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance633a-account-delete-4gbtt" Jan 31 15:04:40 crc kubenswrapper[4751]: I0131 15:04:40.412749 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e25c4e6-94f1-4729-bcb2-ddb3200ccf3a-operator-scripts\") pod \"1e25c4e6-94f1-4729-bcb2-ddb3200ccf3a\" (UID: \"1e25c4e6-94f1-4729-bcb2-ddb3200ccf3a\") " Jan 31 15:04:40 crc kubenswrapper[4751]: I0131 15:04:40.412924 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7m2ns\" (UniqueName: \"kubernetes.io/projected/1e25c4e6-94f1-4729-bcb2-ddb3200ccf3a-kube-api-access-7m2ns\") pod \"1e25c4e6-94f1-4729-bcb2-ddb3200ccf3a\" (UID: \"1e25c4e6-94f1-4729-bcb2-ddb3200ccf3a\") " Jan 31 15:04:40 crc kubenswrapper[4751]: I0131 15:04:40.413796 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1e25c4e6-94f1-4729-bcb2-ddb3200ccf3a-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "1e25c4e6-94f1-4729-bcb2-ddb3200ccf3a" (UID: "1e25c4e6-94f1-4729-bcb2-ddb3200ccf3a"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:04:40 crc kubenswrapper[4751]: I0131 15:04:40.423256 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1e25c4e6-94f1-4729-bcb2-ddb3200ccf3a-kube-api-access-7m2ns" (OuterVolumeSpecName: "kube-api-access-7m2ns") pod "1e25c4e6-94f1-4729-bcb2-ddb3200ccf3a" (UID: "1e25c4e6-94f1-4729-bcb2-ddb3200ccf3a"). InnerVolumeSpecName "kube-api-access-7m2ns". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:04:40 crc kubenswrapper[4751]: I0131 15:04:40.515238 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/1e25c4e6-94f1-4729-bcb2-ddb3200ccf3a-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:40 crc kubenswrapper[4751]: I0131 15:04:40.515295 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7m2ns\" (UniqueName: \"kubernetes.io/projected/1e25c4e6-94f1-4729-bcb2-ddb3200ccf3a-kube-api-access-7m2ns\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:40 crc kubenswrapper[4751]: I0131 15:04:40.848471 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/openstackclient"] Jan 31 15:04:40 crc kubenswrapper[4751]: E0131 15:04:40.848809 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1e25c4e6-94f1-4729-bcb2-ddb3200ccf3a" containerName="mariadb-account-delete" Jan 31 15:04:40 crc kubenswrapper[4751]: I0131 15:04:40.848824 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="1e25c4e6-94f1-4729-bcb2-ddb3200ccf3a" containerName="mariadb-account-delete" Jan 31 15:04:40 crc kubenswrapper[4751]: I0131 15:04:40.848936 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="1e25c4e6-94f1-4729-bcb2-ddb3200ccf3a" containerName="mariadb-account-delete" Jan 31 15:04:40 crc kubenswrapper[4751]: I0131 15:04:40.849399 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/openstackclient" Jan 31 15:04:40 crc kubenswrapper[4751]: I0131 15:04:40.851350 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"default-dockercfg-gh2c4" Jan 31 15:04:40 crc kubenswrapper[4751]: I0131 15:04:40.851598 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"openstack-config-secret" Jan 31 15:04:40 crc kubenswrapper[4751]: I0131 15:04:40.852100 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openstack-scripts-9db6gc427h" Jan 31 15:04:40 crc kubenswrapper[4751]: I0131 15:04:40.852384 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"glance-kuttl-tests"/"openstack-config" Jan 31 15:04:40 crc kubenswrapper[4751]: I0131 15:04:40.869434 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstackclient"] Jan 31 15:04:40 crc kubenswrapper[4751]: I0131 15:04:40.922037 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config\") pod \"openstackclient\" (UID: \"eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb\") " pod="glance-kuttl-tests/openstackclient" Jan 31 15:04:40 crc kubenswrapper[4751]: I0131 15:04:40.922110 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5qxht\" (UniqueName: \"kubernetes.io/projected/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-kube-api-access-5qxht\") pod \"openstackclient\" (UID: \"eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb\") " pod="glance-kuttl-tests/openstackclient" Jan 31 15:04:40 crc kubenswrapper[4751]: I0131 15:04:40.922167 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-config-secret\" (UniqueName: 
\"kubernetes.io/secret/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config-secret\") pod \"openstackclient\" (UID: \"eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb\") " pod="glance-kuttl-tests/openstackclient" Jan 31 15:04:40 crc kubenswrapper[4751]: I0131 15:04:40.922217 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-scripts\") pod \"openstackclient\" (UID: \"eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb\") " pod="glance-kuttl-tests/openstackclient" Jan 31 15:04:41 crc kubenswrapper[4751]: I0131 15:04:41.016244 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance633a-account-delete-4gbtt" event={"ID":"1e25c4e6-94f1-4729-bcb2-ddb3200ccf3a","Type":"ContainerDied","Data":"1a1bc68638ca8cbc0b488a80037c2b24dabf4fc76297519708441cd3c84fc4ca"} Jan 31 15:04:41 crc kubenswrapper[4751]: I0131 15:04:41.016307 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a1bc68638ca8cbc0b488a80037c2b24dabf4fc76297519708441cd3c84fc4ca" Jan 31 15:04:41 crc kubenswrapper[4751]: I0131 15:04:41.016587 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance633a-account-delete-4gbtt" Jan 31 15:04:41 crc kubenswrapper[4751]: I0131 15:04:41.023462 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-scripts\") pod \"openstackclient\" (UID: \"eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb\") " pod="glance-kuttl-tests/openstackclient" Jan 31 15:04:41 crc kubenswrapper[4751]: I0131 15:04:41.023543 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config\") pod \"openstackclient\" (UID: \"eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb\") " pod="glance-kuttl-tests/openstackclient" Jan 31 15:04:41 crc kubenswrapper[4751]: I0131 15:04:41.023575 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5qxht\" (UniqueName: \"kubernetes.io/projected/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-kube-api-access-5qxht\") pod \"openstackclient\" (UID: \"eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb\") " pod="glance-kuttl-tests/openstackclient" Jan 31 15:04:41 crc kubenswrapper[4751]: I0131 15:04:41.023621 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config-secret\") pod \"openstackclient\" (UID: \"eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb\") " pod="glance-kuttl-tests/openstackclient" Jan 31 15:04:41 crc kubenswrapper[4751]: I0131 15:04:41.024789 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config\" (UniqueName: \"kubernetes.io/configmap/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config\") pod \"openstackclient\" (UID: \"eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb\") " pod="glance-kuttl-tests/openstackclient" Jan 31 15:04:41 crc 
kubenswrapper[4751]: I0131 15:04:41.024800 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-scripts\" (UniqueName: \"kubernetes.io/configmap/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-scripts\") pod \"openstackclient\" (UID: \"eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb\") " pod="glance-kuttl-tests/openstackclient" Jan 31 15:04:41 crc kubenswrapper[4751]: I0131 15:04:41.032334 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openstack-config-secret\" (UniqueName: \"kubernetes.io/secret/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config-secret\") pod \"openstackclient\" (UID: \"eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb\") " pod="glance-kuttl-tests/openstackclient" Jan 31 15:04:41 crc kubenswrapper[4751]: I0131 15:04:41.039808 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5qxht\" (UniqueName: \"kubernetes.io/projected/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-kube-api-access-5qxht\") pod \"openstackclient\" (UID: \"eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb\") " pod="glance-kuttl-tests/openstackclient" Jan 31 15:04:41 crc kubenswrapper[4751]: I0131 15:04:41.178358 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/openstackclient" Jan 31 15:04:41 crc kubenswrapper[4751]: I0131 15:04:41.598011 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/openstackclient"] Jan 31 15:04:42 crc kubenswrapper[4751]: I0131 15:04:42.024800 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstackclient" event={"ID":"eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb","Type":"ContainerStarted","Data":"c57156abf0bd178296d0ed3e767e5309d5ca4f93240cd0eeca5761f9adeff9af"} Jan 31 15:04:42 crc kubenswrapper[4751]: I0131 15:04:42.025118 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstackclient" event={"ID":"eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb","Type":"ContainerStarted","Data":"02039631bc193bd76c54ff239695724d1cccb3eda3e0ae5770dc0b2bb3e8c27e"} Jan 31 15:04:42 crc kubenswrapper[4751]: I0131 15:04:42.039594 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/openstackclient" podStartSLOduration=2.039578649 podStartE2EDuration="2.039578649s" podCreationTimestamp="2026-01-31 15:04:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:04:42.037422662 +0000 UTC m=+1386.412135547" watchObservedRunningTime="2026-01-31 15:04:42.039578649 +0000 UTC m=+1386.414291534" Jan 31 15:04:42 crc kubenswrapper[4751]: I0131 15:04:42.627879 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-create-9z5l9"] Jan 31 15:04:42 crc kubenswrapper[4751]: I0131 15:04:42.636969 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-create-9z5l9"] Jan 31 15:04:42 crc kubenswrapper[4751]: I0131 15:04:42.648516 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-633a-account-create-update-kmj9d"] Jan 31 15:04:42 crc kubenswrapper[4751]: I0131 15:04:42.654021 4751 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance633a-account-delete-4gbtt"] Jan 31 15:04:42 crc kubenswrapper[4751]: I0131 15:04:42.659306 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-633a-account-create-update-kmj9d"] Jan 31 15:04:42 crc kubenswrapper[4751]: I0131 15:04:42.670006 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance633a-account-delete-4gbtt"] Jan 31 15:04:42 crc kubenswrapper[4751]: I0131 15:04:42.814247 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-create-bbbff"] Jan 31 15:04:42 crc kubenswrapper[4751]: I0131 15:04:42.815336 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-bbbff" Jan 31 15:04:42 crc kubenswrapper[4751]: I0131 15:04:42.820113 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-fb75-account-create-update-8nfdw"] Jan 31 15:04:42 crc kubenswrapper[4751]: I0131 15:04:42.821439 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-fb75-account-create-update-8nfdw" Jan 31 15:04:42 crc kubenswrapper[4751]: I0131 15:04:42.823592 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-db-secret" Jan 31 15:04:42 crc kubenswrapper[4751]: I0131 15:04:42.824428 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-bbbff"] Jan 31 15:04:42 crc kubenswrapper[4751]: I0131 15:04:42.835148 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-fb75-account-create-update-8nfdw"] Jan 31 15:04:42 crc kubenswrapper[4751]: I0131 15:04:42.879541 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-p87fj\" (UniqueName: \"kubernetes.io/projected/88adbd16-7694-4f3b-8de1-b15932042491-kube-api-access-p87fj\") pod \"glance-db-create-bbbff\" (UID: \"88adbd16-7694-4f3b-8de1-b15932042491\") " pod="glance-kuttl-tests/glance-db-create-bbbff" Jan 31 15:04:42 crc kubenswrapper[4751]: I0131 15:04:42.879628 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d2dc104-ad94-47b2-add7-9314eb88e5b0-operator-scripts\") pod \"glance-fb75-account-create-update-8nfdw\" (UID: \"2d2dc104-ad94-47b2-add7-9314eb88e5b0\") " pod="glance-kuttl-tests/glance-fb75-account-create-update-8nfdw" Jan 31 15:04:42 crc kubenswrapper[4751]: I0131 15:04:42.879677 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vcngm\" (UniqueName: \"kubernetes.io/projected/2d2dc104-ad94-47b2-add7-9314eb88e5b0-kube-api-access-vcngm\") pod \"glance-fb75-account-create-update-8nfdw\" (UID: \"2d2dc104-ad94-47b2-add7-9314eb88e5b0\") " pod="glance-kuttl-tests/glance-fb75-account-create-update-8nfdw" Jan 31 15:04:42 crc kubenswrapper[4751]: I0131 15:04:42.879715 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88adbd16-7694-4f3b-8de1-b15932042491-operator-scripts\") pod \"glance-db-create-bbbff\" (UID: \"88adbd16-7694-4f3b-8de1-b15932042491\") " pod="glance-kuttl-tests/glance-db-create-bbbff" Jan 31 15:04:42 crc kubenswrapper[4751]: I0131 15:04:42.980497 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-p87fj\" (UniqueName: \"kubernetes.io/projected/88adbd16-7694-4f3b-8de1-b15932042491-kube-api-access-p87fj\") pod \"glance-db-create-bbbff\" (UID: \"88adbd16-7694-4f3b-8de1-b15932042491\") " pod="glance-kuttl-tests/glance-db-create-bbbff" Jan 31 15:04:42 crc kubenswrapper[4751]: I0131 15:04:42.980564 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d2dc104-ad94-47b2-add7-9314eb88e5b0-operator-scripts\") pod \"glance-fb75-account-create-update-8nfdw\" (UID: \"2d2dc104-ad94-47b2-add7-9314eb88e5b0\") " pod="glance-kuttl-tests/glance-fb75-account-create-update-8nfdw" Jan 31 15:04:42 crc kubenswrapper[4751]: I0131 15:04:42.980600 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vcngm\" (UniqueName: \"kubernetes.io/projected/2d2dc104-ad94-47b2-add7-9314eb88e5b0-kube-api-access-vcngm\") pod \"glance-fb75-account-create-update-8nfdw\" (UID: \"2d2dc104-ad94-47b2-add7-9314eb88e5b0\") " pod="glance-kuttl-tests/glance-fb75-account-create-update-8nfdw" Jan 31 15:04:42 crc kubenswrapper[4751]: I0131 15:04:42.980629 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88adbd16-7694-4f3b-8de1-b15932042491-operator-scripts\") pod \"glance-db-create-bbbff\" (UID: \"88adbd16-7694-4f3b-8de1-b15932042491\") " pod="glance-kuttl-tests/glance-db-create-bbbff" Jan 31 
15:04:42 crc kubenswrapper[4751]: I0131 15:04:42.981343 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88adbd16-7694-4f3b-8de1-b15932042491-operator-scripts\") pod \"glance-db-create-bbbff\" (UID: \"88adbd16-7694-4f3b-8de1-b15932042491\") " pod="glance-kuttl-tests/glance-db-create-bbbff" Jan 31 15:04:42 crc kubenswrapper[4751]: I0131 15:04:42.981845 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d2dc104-ad94-47b2-add7-9314eb88e5b0-operator-scripts\") pod \"glance-fb75-account-create-update-8nfdw\" (UID: \"2d2dc104-ad94-47b2-add7-9314eb88e5b0\") " pod="glance-kuttl-tests/glance-fb75-account-create-update-8nfdw" Jan 31 15:04:42 crc kubenswrapper[4751]: I0131 15:04:42.998116 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-p87fj\" (UniqueName: \"kubernetes.io/projected/88adbd16-7694-4f3b-8de1-b15932042491-kube-api-access-p87fj\") pod \"glance-db-create-bbbff\" (UID: \"88adbd16-7694-4f3b-8de1-b15932042491\") " pod="glance-kuttl-tests/glance-db-create-bbbff" Jan 31 15:04:42 crc kubenswrapper[4751]: I0131 15:04:42.999210 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vcngm\" (UniqueName: \"kubernetes.io/projected/2d2dc104-ad94-47b2-add7-9314eb88e5b0-kube-api-access-vcngm\") pod \"glance-fb75-account-create-update-8nfdw\" (UID: \"2d2dc104-ad94-47b2-add7-9314eb88e5b0\") " pod="glance-kuttl-tests/glance-fb75-account-create-update-8nfdw" Jan 31 15:04:43 crc kubenswrapper[4751]: I0131 15:04:43.139889 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-bbbff" Jan 31 15:04:43 crc kubenswrapper[4751]: I0131 15:04:43.157260 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-fb75-account-create-update-8nfdw" Jan 31 15:04:43 crc kubenswrapper[4751]: I0131 15:04:43.582951 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-create-bbbff"] Jan 31 15:04:43 crc kubenswrapper[4751]: W0131 15:04:43.590945 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod88adbd16_7694_4f3b_8de1_b15932042491.slice/crio-2a073f1ce887077ba61fb8ce7661d5c057ed31d457d77d5994a4916ab666cd3c WatchSource:0}: Error finding container 2a073f1ce887077ba61fb8ce7661d5c057ed31d457d77d5994a4916ab666cd3c: Status 404 returned error can't find the container with id 2a073f1ce887077ba61fb8ce7661d5c057ed31d457d77d5994a4916ab666cd3c Jan 31 15:04:43 crc kubenswrapper[4751]: I0131 15:04:43.641473 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-fb75-account-create-update-8nfdw"] Jan 31 15:04:43 crc kubenswrapper[4751]: W0131 15:04:43.652156 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2d2dc104_ad94_47b2_add7_9314eb88e5b0.slice/crio-4567af6c9694bff5a0fadc261de59c39482b429c7e9f3decda303815f32bc220 WatchSource:0}: Error finding container 4567af6c9694bff5a0fadc261de59c39482b429c7e9f3decda303815f32bc220: Status 404 returned error can't find the container with id 4567af6c9694bff5a0fadc261de59c39482b429c7e9f3decda303815f32bc220 Jan 31 15:04:44 crc kubenswrapper[4751]: I0131 15:04:44.047800 4751 generic.go:334] "Generic (PLEG): container finished" podID="2d2dc104-ad94-47b2-add7-9314eb88e5b0" containerID="a87d6cd135483f11e653acd5122adb7f7e32f94e7051f9157fa4ae04850a4813" exitCode=0 Jan 31 15:04:44 crc kubenswrapper[4751]: I0131 15:04:44.048275 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-fb75-account-create-update-8nfdw" 
event={"ID":"2d2dc104-ad94-47b2-add7-9314eb88e5b0","Type":"ContainerDied","Data":"a87d6cd135483f11e653acd5122adb7f7e32f94e7051f9157fa4ae04850a4813"} Jan 31 15:04:44 crc kubenswrapper[4751]: I0131 15:04:44.048307 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-fb75-account-create-update-8nfdw" event={"ID":"2d2dc104-ad94-47b2-add7-9314eb88e5b0","Type":"ContainerStarted","Data":"4567af6c9694bff5a0fadc261de59c39482b429c7e9f3decda303815f32bc220"} Jan 31 15:04:44 crc kubenswrapper[4751]: I0131 15:04:44.054968 4751 generic.go:334] "Generic (PLEG): container finished" podID="88adbd16-7694-4f3b-8de1-b15932042491" containerID="77d9f01225cc43eac33fe40d8bc014694a35ae20a7b11d1e4c070bd741ce303a" exitCode=0 Jan 31 15:04:44 crc kubenswrapper[4751]: I0131 15:04:44.055008 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-bbbff" event={"ID":"88adbd16-7694-4f3b-8de1-b15932042491","Type":"ContainerDied","Data":"77d9f01225cc43eac33fe40d8bc014694a35ae20a7b11d1e4c070bd741ce303a"} Jan 31 15:04:44 crc kubenswrapper[4751]: I0131 15:04:44.055029 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-bbbff" event={"ID":"88adbd16-7694-4f3b-8de1-b15932042491","Type":"ContainerStarted","Data":"2a073f1ce887077ba61fb8ce7661d5c057ed31d457d77d5994a4916ab666cd3c"} Jan 31 15:04:44 crc kubenswrapper[4751]: I0131 15:04:44.417831 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1e25c4e6-94f1-4729-bcb2-ddb3200ccf3a" path="/var/lib/kubelet/pods/1e25c4e6-94f1-4729-bcb2-ddb3200ccf3a/volumes" Jan 31 15:04:44 crc kubenswrapper[4751]: I0131 15:04:44.418578 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4d66e78e-6853-45e7-966f-cd9ec9586439" path="/var/lib/kubelet/pods/4d66e78e-6853-45e7-966f-cd9ec9586439/volumes" Jan 31 15:04:44 crc kubenswrapper[4751]: I0131 15:04:44.419047 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes 
dir" podUID="8028c623-f182-4e00-9c6d-c864a023abb5" path="/var/lib/kubelet/pods/8028c623-f182-4e00-9c6d-c864a023abb5/volumes" Jan 31 15:04:45 crc kubenswrapper[4751]: I0131 15:04:45.406424 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-fb75-account-create-update-8nfdw" Jan 31 15:04:45 crc kubenswrapper[4751]: I0131 15:04:45.413131 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-create-bbbff" Jan 31 15:04:45 crc kubenswrapper[4751]: I0131 15:04:45.532908 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vcngm\" (UniqueName: \"kubernetes.io/projected/2d2dc104-ad94-47b2-add7-9314eb88e5b0-kube-api-access-vcngm\") pod \"2d2dc104-ad94-47b2-add7-9314eb88e5b0\" (UID: \"2d2dc104-ad94-47b2-add7-9314eb88e5b0\") " Jan 31 15:04:45 crc kubenswrapper[4751]: I0131 15:04:45.533015 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88adbd16-7694-4f3b-8de1-b15932042491-operator-scripts\") pod \"88adbd16-7694-4f3b-8de1-b15932042491\" (UID: \"88adbd16-7694-4f3b-8de1-b15932042491\") " Jan 31 15:04:45 crc kubenswrapper[4751]: I0131 15:04:45.533091 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d2dc104-ad94-47b2-add7-9314eb88e5b0-operator-scripts\") pod \"2d2dc104-ad94-47b2-add7-9314eb88e5b0\" (UID: \"2d2dc104-ad94-47b2-add7-9314eb88e5b0\") " Jan 31 15:04:45 crc kubenswrapper[4751]: I0131 15:04:45.533168 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p87fj\" (UniqueName: \"kubernetes.io/projected/88adbd16-7694-4f3b-8de1-b15932042491-kube-api-access-p87fj\") pod \"88adbd16-7694-4f3b-8de1-b15932042491\" (UID: \"88adbd16-7694-4f3b-8de1-b15932042491\") " Jan 31 15:04:45 crc 
kubenswrapper[4751]: I0131 15:04:45.539332 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/88adbd16-7694-4f3b-8de1-b15932042491-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "88adbd16-7694-4f3b-8de1-b15932042491" (UID: "88adbd16-7694-4f3b-8de1-b15932042491"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:04:45 crc kubenswrapper[4751]: I0131 15:04:45.539738 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2d2dc104-ad94-47b2-add7-9314eb88e5b0-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "2d2dc104-ad94-47b2-add7-9314eb88e5b0" (UID: "2d2dc104-ad94-47b2-add7-9314eb88e5b0"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:04:45 crc kubenswrapper[4751]: I0131 15:04:45.549269 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/88adbd16-7694-4f3b-8de1-b15932042491-kube-api-access-p87fj" (OuterVolumeSpecName: "kube-api-access-p87fj") pod "88adbd16-7694-4f3b-8de1-b15932042491" (UID: "88adbd16-7694-4f3b-8de1-b15932042491"). InnerVolumeSpecName "kube-api-access-p87fj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:04:45 crc kubenswrapper[4751]: I0131 15:04:45.551377 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d2dc104-ad94-47b2-add7-9314eb88e5b0-kube-api-access-vcngm" (OuterVolumeSpecName: "kube-api-access-vcngm") pod "2d2dc104-ad94-47b2-add7-9314eb88e5b0" (UID: "2d2dc104-ad94-47b2-add7-9314eb88e5b0"). InnerVolumeSpecName "kube-api-access-vcngm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:04:45 crc kubenswrapper[4751]: I0131 15:04:45.635274 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-p87fj\" (UniqueName: \"kubernetes.io/projected/88adbd16-7694-4f3b-8de1-b15932042491-kube-api-access-p87fj\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:45 crc kubenswrapper[4751]: I0131 15:04:45.635329 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vcngm\" (UniqueName: \"kubernetes.io/projected/2d2dc104-ad94-47b2-add7-9314eb88e5b0-kube-api-access-vcngm\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:45 crc kubenswrapper[4751]: I0131 15:04:45.635343 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/88adbd16-7694-4f3b-8de1-b15932042491-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:45 crc kubenswrapper[4751]: I0131 15:04:45.635354 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/2d2dc104-ad94-47b2-add7-9314eb88e5b0-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:46 crc kubenswrapper[4751]: I0131 15:04:46.075035 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-fb75-account-create-update-8nfdw" Jan 31 15:04:46 crc kubenswrapper[4751]: I0131 15:04:46.075049 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-fb75-account-create-update-8nfdw" event={"ID":"2d2dc104-ad94-47b2-add7-9314eb88e5b0","Type":"ContainerDied","Data":"4567af6c9694bff5a0fadc261de59c39482b429c7e9f3decda303815f32bc220"} Jan 31 15:04:46 crc kubenswrapper[4751]: I0131 15:04:46.075121 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4567af6c9694bff5a0fadc261de59c39482b429c7e9f3decda303815f32bc220" Jan 31 15:04:46 crc kubenswrapper[4751]: I0131 15:04:46.076949 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-create-bbbff" event={"ID":"88adbd16-7694-4f3b-8de1-b15932042491","Type":"ContainerDied","Data":"2a073f1ce887077ba61fb8ce7661d5c057ed31d457d77d5994a4916ab666cd3c"} Jan 31 15:04:46 crc kubenswrapper[4751]: I0131 15:04:46.076979 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2a073f1ce887077ba61fb8ce7661d5c057ed31d457d77d5994a4916ab666cd3c" Jan 31 15:04:46 crc kubenswrapper[4751]: I0131 15:04:46.077104 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-create-bbbff" Jan 31 15:04:47 crc kubenswrapper[4751]: I0131 15:04:47.946153 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-db-sync-96ldw"] Jan 31 15:04:47 crc kubenswrapper[4751]: E0131 15:04:47.946903 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="88adbd16-7694-4f3b-8de1-b15932042491" containerName="mariadb-database-create" Jan 31 15:04:47 crc kubenswrapper[4751]: I0131 15:04:47.946916 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="88adbd16-7694-4f3b-8de1-b15932042491" containerName="mariadb-database-create" Jan 31 15:04:47 crc kubenswrapper[4751]: E0131 15:04:47.946946 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2d2dc104-ad94-47b2-add7-9314eb88e5b0" containerName="mariadb-account-create-update" Jan 31 15:04:47 crc kubenswrapper[4751]: I0131 15:04:47.946953 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d2dc104-ad94-47b2-add7-9314eb88e5b0" containerName="mariadb-account-create-update" Jan 31 15:04:47 crc kubenswrapper[4751]: I0131 15:04:47.947132 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="88adbd16-7694-4f3b-8de1-b15932042491" containerName="mariadb-database-create" Jan 31 15:04:47 crc kubenswrapper[4751]: I0131 15:04:47.947150 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="2d2dc104-ad94-47b2-add7-9314eb88e5b0" containerName="mariadb-account-create-update" Jan 31 15:04:47 crc kubenswrapper[4751]: I0131 15:04:47.947722 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-96ldw" Jan 31 15:04:47 crc kubenswrapper[4751]: I0131 15:04:47.949853 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-config-data" Jan 31 15:04:47 crc kubenswrapper[4751]: I0131 15:04:47.949858 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-mdf2n" Jan 31 15:04:47 crc kubenswrapper[4751]: I0131 15:04:47.973241 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-96ldw"] Jan 31 15:04:48 crc kubenswrapper[4751]: I0131 15:04:48.071735 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/837cbdc7-6443-4789-a796-c2f1bd79119d-db-sync-config-data\") pod \"glance-db-sync-96ldw\" (UID: \"837cbdc7-6443-4789-a796-c2f1bd79119d\") " pod="glance-kuttl-tests/glance-db-sync-96ldw" Jan 31 15:04:48 crc kubenswrapper[4751]: I0131 15:04:48.071819 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8jzf\" (UniqueName: \"kubernetes.io/projected/837cbdc7-6443-4789-a796-c2f1bd79119d-kube-api-access-w8jzf\") pod \"glance-db-sync-96ldw\" (UID: \"837cbdc7-6443-4789-a796-c2f1bd79119d\") " pod="glance-kuttl-tests/glance-db-sync-96ldw" Jan 31 15:04:48 crc kubenswrapper[4751]: I0131 15:04:48.071868 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/837cbdc7-6443-4789-a796-c2f1bd79119d-config-data\") pod \"glance-db-sync-96ldw\" (UID: \"837cbdc7-6443-4789-a796-c2f1bd79119d\") " pod="glance-kuttl-tests/glance-db-sync-96ldw" Jan 31 15:04:48 crc kubenswrapper[4751]: I0131 15:04:48.173847 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"db-sync-config-data\" (UniqueName: 
\"kubernetes.io/secret/837cbdc7-6443-4789-a796-c2f1bd79119d-db-sync-config-data\") pod \"glance-db-sync-96ldw\" (UID: \"837cbdc7-6443-4789-a796-c2f1bd79119d\") " pod="glance-kuttl-tests/glance-db-sync-96ldw" Jan 31 15:04:48 crc kubenswrapper[4751]: I0131 15:04:48.173959 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w8jzf\" (UniqueName: \"kubernetes.io/projected/837cbdc7-6443-4789-a796-c2f1bd79119d-kube-api-access-w8jzf\") pod \"glance-db-sync-96ldw\" (UID: \"837cbdc7-6443-4789-a796-c2f1bd79119d\") " pod="glance-kuttl-tests/glance-db-sync-96ldw" Jan 31 15:04:48 crc kubenswrapper[4751]: I0131 15:04:48.174064 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/837cbdc7-6443-4789-a796-c2f1bd79119d-config-data\") pod \"glance-db-sync-96ldw\" (UID: \"837cbdc7-6443-4789-a796-c2f1bd79119d\") " pod="glance-kuttl-tests/glance-db-sync-96ldw" Jan 31 15:04:48 crc kubenswrapper[4751]: I0131 15:04:48.185269 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/837cbdc7-6443-4789-a796-c2f1bd79119d-config-data\") pod \"glance-db-sync-96ldw\" (UID: \"837cbdc7-6443-4789-a796-c2f1bd79119d\") " pod="glance-kuttl-tests/glance-db-sync-96ldw" Jan 31 15:04:48 crc kubenswrapper[4751]: I0131 15:04:48.185553 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/837cbdc7-6443-4789-a796-c2f1bd79119d-db-sync-config-data\") pod \"glance-db-sync-96ldw\" (UID: \"837cbdc7-6443-4789-a796-c2f1bd79119d\") " pod="glance-kuttl-tests/glance-db-sync-96ldw" Jan 31 15:04:48 crc kubenswrapper[4751]: I0131 15:04:48.197580 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8jzf\" (UniqueName: \"kubernetes.io/projected/837cbdc7-6443-4789-a796-c2f1bd79119d-kube-api-access-w8jzf\") pod 
\"glance-db-sync-96ldw\" (UID: \"837cbdc7-6443-4789-a796-c2f1bd79119d\") " pod="glance-kuttl-tests/glance-db-sync-96ldw" Jan 31 15:04:48 crc kubenswrapper[4751]: I0131 15:04:48.287550 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-96ldw" Jan 31 15:04:48 crc kubenswrapper[4751]: I0131 15:04:48.532202 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-db-sync-96ldw"] Jan 31 15:04:49 crc kubenswrapper[4751]: I0131 15:04:49.098387 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-96ldw" event={"ID":"837cbdc7-6443-4789-a796-c2f1bd79119d","Type":"ContainerStarted","Data":"8c9ca246c6d8d22550b0a337fe277ae3824da506216668e0ce9b2ebcd4cee908"} Jan 31 15:04:49 crc kubenswrapper[4751]: I0131 15:04:49.098747 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-96ldw" event={"ID":"837cbdc7-6443-4789-a796-c2f1bd79119d","Type":"ContainerStarted","Data":"826ef65431d89ba10bddb0143acb4f6024aa6af60be7b453598c0fcfebc2d4cc"} Jan 31 15:04:49 crc kubenswrapper[4751]: I0131 15:04:49.118729 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-db-sync-96ldw" podStartSLOduration=2.118704519 podStartE2EDuration="2.118704519s" podCreationTimestamp="2026-01-31 15:04:47 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:04:49.112344621 +0000 UTC m=+1393.487057516" watchObservedRunningTime="2026-01-31 15:04:49.118704519 +0000 UTC m=+1393.493417404" Jan 31 15:04:52 crc kubenswrapper[4751]: I0131 15:04:52.128597 4751 generic.go:334] "Generic (PLEG): container finished" podID="837cbdc7-6443-4789-a796-c2f1bd79119d" containerID="8c9ca246c6d8d22550b0a337fe277ae3824da506216668e0ce9b2ebcd4cee908" exitCode=0 Jan 31 15:04:52 crc kubenswrapper[4751]: I0131 15:04:52.128675 4751 
kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-96ldw" event={"ID":"837cbdc7-6443-4789-a796-c2f1bd79119d","Type":"ContainerDied","Data":"8c9ca246c6d8d22550b0a337fe277ae3824da506216668e0ce9b2ebcd4cee908"} Jan 31 15:04:53 crc kubenswrapper[4751]: I0131 15:04:53.486530 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-96ldw" Jan 31 15:04:53 crc kubenswrapper[4751]: I0131 15:04:53.650755 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/837cbdc7-6443-4789-a796-c2f1bd79119d-db-sync-config-data\") pod \"837cbdc7-6443-4789-a796-c2f1bd79119d\" (UID: \"837cbdc7-6443-4789-a796-c2f1bd79119d\") " Jan 31 15:04:53 crc kubenswrapper[4751]: I0131 15:04:53.650844 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/837cbdc7-6443-4789-a796-c2f1bd79119d-config-data\") pod \"837cbdc7-6443-4789-a796-c2f1bd79119d\" (UID: \"837cbdc7-6443-4789-a796-c2f1bd79119d\") " Jan 31 15:04:53 crc kubenswrapper[4751]: I0131 15:04:53.651151 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8jzf\" (UniqueName: \"kubernetes.io/projected/837cbdc7-6443-4789-a796-c2f1bd79119d-kube-api-access-w8jzf\") pod \"837cbdc7-6443-4789-a796-c2f1bd79119d\" (UID: \"837cbdc7-6443-4789-a796-c2f1bd79119d\") " Jan 31 15:04:53 crc kubenswrapper[4751]: I0131 15:04:53.655634 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/837cbdc7-6443-4789-a796-c2f1bd79119d-db-sync-config-data" (OuterVolumeSpecName: "db-sync-config-data") pod "837cbdc7-6443-4789-a796-c2f1bd79119d" (UID: "837cbdc7-6443-4789-a796-c2f1bd79119d"). InnerVolumeSpecName "db-sync-config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:04:53 crc kubenswrapper[4751]: I0131 15:04:53.655638 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/837cbdc7-6443-4789-a796-c2f1bd79119d-kube-api-access-w8jzf" (OuterVolumeSpecName: "kube-api-access-w8jzf") pod "837cbdc7-6443-4789-a796-c2f1bd79119d" (UID: "837cbdc7-6443-4789-a796-c2f1bd79119d"). InnerVolumeSpecName "kube-api-access-w8jzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:04:53 crc kubenswrapper[4751]: I0131 15:04:53.719386 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/837cbdc7-6443-4789-a796-c2f1bd79119d-config-data" (OuterVolumeSpecName: "config-data") pod "837cbdc7-6443-4789-a796-c2f1bd79119d" (UID: "837cbdc7-6443-4789-a796-c2f1bd79119d"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:04:53 crc kubenswrapper[4751]: I0131 15:04:53.753686 4751 reconciler_common.go:293] "Volume detached for volume \"db-sync-config-data\" (UniqueName: \"kubernetes.io/secret/837cbdc7-6443-4789-a796-c2f1bd79119d-db-sync-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:53 crc kubenswrapper[4751]: I0131 15:04:53.753740 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/837cbdc7-6443-4789-a796-c2f1bd79119d-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:53 crc kubenswrapper[4751]: I0131 15:04:53.753761 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w8jzf\" (UniqueName: \"kubernetes.io/projected/837cbdc7-6443-4789-a796-c2f1bd79119d-kube-api-access-w8jzf\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:54 crc kubenswrapper[4751]: I0131 15:04:54.152731 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-db-sync-96ldw" 
event={"ID":"837cbdc7-6443-4789-a796-c2f1bd79119d","Type":"ContainerDied","Data":"826ef65431d89ba10bddb0143acb4f6024aa6af60be7b453598c0fcfebc2d4cc"} Jan 31 15:04:54 crc kubenswrapper[4751]: I0131 15:04:54.152774 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="826ef65431d89ba10bddb0143acb4f6024aa6af60be7b453598c0fcfebc2d4cc" Jan 31 15:04:54 crc kubenswrapper[4751]: I0131 15:04:54.153181 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-db-sync-96ldw" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.307022 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Jan 31 15:04:55 crc kubenswrapper[4751]: E0131 15:04:55.307393 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="837cbdc7-6443-4789-a796-c2f1bd79119d" containerName="glance-db-sync" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.307410 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="837cbdc7-6443-4789-a796-c2f1bd79119d" containerName="glance-db-sync" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.307584 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="837cbdc7-6443-4789-a796-c2f1bd79119d" containerName="glance-db-sync" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.308513 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.313794 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-scripts" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.313797 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-external-config-data" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.316342 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.317943 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-glance-dockercfg-mdf2n" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.479200 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-scripts\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.479250 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-httpd-run\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.479273 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-config-data\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " 
pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.479293 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-logs\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.479337 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjmr8\" (UniqueName: \"kubernetes.io/projected/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-kube-api-access-rjmr8\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.479359 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-sys\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.479385 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.479404 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-lib-modules\") pod \"glance-default-external-api-1\" (UID: 
\"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.479423 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-dev\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.479437 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-var-locks-brick\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.479473 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-etc-nvme\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.479493 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-run\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.479506 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-etc-iscsi\") pod 
\"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.479526 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.541319 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.542762 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.572265 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.580991 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-run\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.581029 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-etc-iscsi\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.581054 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.581100 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-scripts\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.581121 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-httpd-run\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.581149 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-config-data\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.581166 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-logs\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.581219 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjmr8\" (UniqueName: 
\"kubernetes.io/projected/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-kube-api-access-rjmr8\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.581235 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-sys\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.581267 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.581287 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-lib-modules\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.581305 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-dev\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.581320 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-var-locks-brick\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.581354 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-etc-nvme\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.581433 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-etc-nvme\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.581468 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-run\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.581490 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-etc-iscsi\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.581817 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod 
\"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") device mount path \"/mnt/openstack/pv13\"" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.582463 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-httpd-run\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.583043 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-sys\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.583578 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-logs\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.583633 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-lib-modules\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.583973 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-external-api-1\" (UID: 
\"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") device mount path \"/mnt/openstack/pv17\"" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.585293 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-dev\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.585351 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-var-locks-brick\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.591377 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-scripts\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.593926 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-config-data\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.618451 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " 
pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.625509 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjmr8\" (UniqueName: \"kubernetes.io/projected/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-kube-api-access-rjmr8\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.647276 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"glance-default-external-api-1\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.662311 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.663668 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.671607 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"glance-default-internal-config-data" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.689376 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.689438 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.689463 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1590321-a9e4-43b9-a9a0-04dc832b3332-config-data\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.689492 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.689523 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-dev\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.689566 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.689589 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-run\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.689612 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d1590321-a9e4-43b9-a9a0-04dc832b3332-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.689646 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.689675 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1590321-a9e4-43b9-a9a0-04dc832b3332-scripts\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.689706 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1590321-a9e4-43b9-a9a0-04dc832b3332-logs\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.689727 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.689770 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g9x4g\" (UniqueName: \"kubernetes.io/projected/d1590321-a9e4-43b9-a9a0-04dc832b3332-kube-api-access-g9x4g\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.689795 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-sys\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc 
kubenswrapper[4751]: I0131 15:04:55.692555 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.693890 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.703110 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.710711 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.791215 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.791261 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b39c23a2-492d-4401-bb73-6b4bfc849bec-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.791289 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b39c23a2-492d-4401-bb73-6b4bfc849bec-config-data\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.791339 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b39c23a2-492d-4401-bb73-6b4bfc849bec-scripts\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.791355 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.791468 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g9x4g\" (UniqueName: \"kubernetes.io/projected/d1590321-a9e4-43b9-a9a0-04dc832b3332-kube-api-access-g9x4g\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.791534 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.791561 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-logs\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc 
kubenswrapper[4751]: I0131 15:04:55.791587 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.791633 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-sys\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.791691 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-dev\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.791778 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-sys\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.791798 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.791863 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-sys\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.791889 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.791916 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-dev\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.791991 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.792040 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.792081 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.792156 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.792189 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.792217 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1590321-a9e4-43b9-a9a0-04dc832b3332-config-data\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.792244 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.792271 4751 
operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.792278 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.792361 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.792396 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.792396 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.792447 4751 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-dev\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.792369 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") device mount path \"/mnt/openstack/pv02\"" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.792466 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-run\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.792484 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-sys\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.792508 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-dev\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.792555 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.792609 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.792642 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-run\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.792663 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.792684 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d1590321-a9e4-43b9-a9a0-04dc832b3332-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.792691 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"run\" (UniqueName: \"kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-run\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.792727 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.792753 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-run\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.792773 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1590321-a9e4-43b9-a9a0-04dc832b3332-scripts\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.792789 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b39c23a2-492d-4401-bb73-6b4bfc849bec-logs\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.792807 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: 
\"kubernetes.io/secret/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.792836 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mj5bx\" (UniqueName: \"kubernetes.io/projected/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-kube-api-access-mj5bx\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.792863 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.792893 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1590321-a9e4-43b9-a9a0-04dc832b3332-logs\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.792912 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b65vr\" (UniqueName: \"kubernetes.io/projected/b39c23a2-492d-4401-bb73-6b4bfc849bec-kube-api-access-b65vr\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.792939 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.792957 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") device mount path \"/mnt/openstack/pv05\"" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.793039 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.793317 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d1590321-a9e4-43b9-a9a0-04dc832b3332-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.793335 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1590321-a9e4-43b9-a9a0-04dc832b3332-logs\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.796524 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" 
(UniqueName: \"kubernetes.io/secret/d1590321-a9e4-43b9-a9a0-04dc832b3332-scripts\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.799288 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1590321-a9e4-43b9-a9a0-04dc832b3332-config-data\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.810786 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g9x4g\" (UniqueName: \"kubernetes.io/projected/d1590321-a9e4-43b9-a9a0-04dc832b3332-kube-api-access-g9x4g\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.813677 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.815934 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.863932 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.893915 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-run\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.894095 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b39c23a2-492d-4401-bb73-6b4bfc849bec-logs\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.894208 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.894299 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mj5bx\" (UniqueName: \"kubernetes.io/projected/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-kube-api-access-mj5bx\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.894387 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " 
pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.894473 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-b65vr\" (UniqueName: \"kubernetes.io/projected/b39c23a2-492d-4401-bb73-6b4bfc849bec-kube-api-access-b65vr\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.894566 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.894681 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b39c23a2-492d-4401-bb73-6b4bfc849bec-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.894779 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b39c23a2-492d-4401-bb73-6b4bfc849bec-config-data\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.894879 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b39c23a2-492d-4401-bb73-6b4bfc849bec-scripts\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " 
pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.894961 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.895036 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.895137 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-logs\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.895217 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.895299 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-dev\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc 
kubenswrapper[4751]: I0131 15:04:55.895369 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.895436 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-sys\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.895513 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.895579 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-dev\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.895656 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.895726 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.895799 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.895868 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.895935 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.896036 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.896142 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for 
volume \"run\" (UniqueName: \"kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-run\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.896265 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-sys\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.896349 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.896503 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.896595 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-run\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.897017 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b39c23a2-492d-4401-bb73-6b4bfc849bec-logs\") pod 
\"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.897529 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-dev\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.898253 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.898323 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-sys\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.898366 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") device mount path \"/mnt/openstack/pv18\"" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.898685 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.898755 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.899000 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b39c23a2-492d-4401-bb73-6b4bfc849bec-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.900707 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-dev\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.900790 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.900834 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " 
pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.900869 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-run\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.900863 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.900898 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") device mount path \"/mnt/openstack/pv16\"" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.900942 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-sys\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.900954 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") device mount path \"/mnt/openstack/pv15\"" 
pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.900980 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.900995 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") device mount path \"/mnt/openstack/pv08\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.901035 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.902210 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.903568 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 
15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.904083 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-logs\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.905965 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b39c23a2-492d-4401-bb73-6b4bfc849bec-config-data\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.909739 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b39c23a2-492d-4401-bb73-6b4bfc849bec-scripts\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.920691 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mj5bx\" (UniqueName: \"kubernetes.io/projected/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-kube-api-access-mj5bx\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.922728 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-b65vr\" (UniqueName: \"kubernetes.io/projected/b39c23a2-492d-4401-bb73-6b4bfc849bec-kube-api-access-b65vr\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.927489 
4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.935483 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.945242 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.959141 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-internal-api-1\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:55 crc kubenswrapper[4751]: I0131 15:04:55.982458 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:56 crc kubenswrapper[4751]: I0131 15:04:56.019710 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:04:56 crc kubenswrapper[4751]: I0131 15:04:56.027846 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:56 crc kubenswrapper[4751]: I0131 15:04:56.299502 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 15:04:56 crc kubenswrapper[4751]: W0131 15:04:56.302128 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podd1590321_a9e4_43b9_a9a0_04dc832b3332.slice/crio-74ea627d68211382ddcd6045fc064247fa1aa0e13ee2656765ee16aee6d9bb61 WatchSource:0}: Error finding container 74ea627d68211382ddcd6045fc064247fa1aa0e13ee2656765ee16aee6d9bb61: Status 404 returned error can't find the container with id 74ea627d68211382ddcd6045fc064247fa1aa0e13ee2656765ee16aee6d9bb61 Jan 31 15:04:56 crc kubenswrapper[4751]: I0131 15:04:56.399824 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Jan 31 15:04:56 crc kubenswrapper[4751]: W0131 15:04:56.402976 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod2ce40f98_80af_4a4b_8556_c5c7dd84fc58.slice/crio-c11c5ca70f63ee145b6adb16291fa14f1d7247eb5f019288ced5f1338933a04e WatchSource:0}: Error finding container c11c5ca70f63ee145b6adb16291fa14f1d7247eb5f019288ced5f1338933a04e: Status 404 returned error can't find the container with id c11c5ca70f63ee145b6adb16291fa14f1d7247eb5f019288ced5f1338933a04e Jan 31 15:04:56 crc kubenswrapper[4751]: I0131 15:04:56.449935 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 15:04:56 crc kubenswrapper[4751]: I0131 15:04:56.480596 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 15:04:56 crc kubenswrapper[4751]: W0131 15:04:56.486528 4751 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb39c23a2_492d_4401_bb73_6b4bfc849bec.slice/crio-995975e00b456601ec8a0bc0c21da9af3666fa6889c9406bb7f743ae54f464a2 WatchSource:0}: Error finding container 995975e00b456601ec8a0bc0c21da9af3666fa6889c9406bb7f743ae54f464a2: Status 404 returned error can't find the container with id 995975e00b456601ec8a0bc0c21da9af3666fa6889c9406bb7f743ae54f464a2 Jan 31 15:04:56 crc kubenswrapper[4751]: I0131 15:04:56.551954 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 15:04:56 crc kubenswrapper[4751]: W0131 15:04:56.559231 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8309fec8_e5ee_4e23_8617_ab2e7ba833d6.slice/crio-dd5644a60c6d37de5875344e64047668e5f4f00160a8166735d8995e0b9a3862 WatchSource:0}: Error finding container dd5644a60c6d37de5875344e64047668e5f4f00160a8166735d8995e0b9a3862: Status 404 returned error can't find the container with id dd5644a60c6d37de5875344e64047668e5f4f00160a8166735d8995e0b9a3862 Jan 31 15:04:57 crc kubenswrapper[4751]: I0131 15:04:57.176784 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"8309fec8-e5ee-4e23-8617-ab2e7ba833d6","Type":"ContainerStarted","Data":"2af1683a972bf70a8cfc2843e554280e84c5731cd7be66a108127c478a878dd2"} Jan 31 15:04:57 crc kubenswrapper[4751]: I0131 15:04:57.177235 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"8309fec8-e5ee-4e23-8617-ab2e7ba833d6","Type":"ContainerStarted","Data":"dd5644a60c6d37de5875344e64047668e5f4f00160a8166735d8995e0b9a3862"} Jan 31 15:04:57 crc kubenswrapper[4751]: I0131 15:04:57.178156 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" 
event={"ID":"d1590321-a9e4-43b9-a9a0-04dc832b3332","Type":"ContainerStarted","Data":"efd3679f0fb441a01fa19571a9aa5ba37ff46c03bfa892f113c431e3a9022ffe"} Jan 31 15:04:57 crc kubenswrapper[4751]: I0131 15:04:57.178301 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"d1590321-a9e4-43b9-a9a0-04dc832b3332","Type":"ContainerStarted","Data":"74ea627d68211382ddcd6045fc064247fa1aa0e13ee2656765ee16aee6d9bb61"} Jan 31 15:04:57 crc kubenswrapper[4751]: I0131 15:04:57.179586 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"2ce40f98-80af-4a4b-8556-c5c7dd84fc58","Type":"ContainerStarted","Data":"e83e3eee6d1bd6eeb18c2480a68530a6737d3f76fac28ccff0e74d019e406b35"} Jan 31 15:04:57 crc kubenswrapper[4751]: I0131 15:04:57.179699 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"2ce40f98-80af-4a4b-8556-c5c7dd84fc58","Type":"ContainerStarted","Data":"c11c5ca70f63ee145b6adb16291fa14f1d7247eb5f019288ced5f1338933a04e"} Jan 31 15:04:57 crc kubenswrapper[4751]: I0131 15:04:57.181080 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"b39c23a2-492d-4401-bb73-6b4bfc849bec","Type":"ContainerStarted","Data":"ade6d8caba0bf8e6958b676f72eece1bb73609f4cb6e8cdf0b7220751b79dcad"} Jan 31 15:04:57 crc kubenswrapper[4751]: I0131 15:04:57.181119 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"b39c23a2-492d-4401-bb73-6b4bfc849bec","Type":"ContainerStarted","Data":"995975e00b456601ec8a0bc0c21da9af3666fa6889c9406bb7f743ae54f464a2"} Jan 31 15:04:58 crc kubenswrapper[4751]: I0131 15:04:58.192234 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" 
event={"ID":"d1590321-a9e4-43b9-a9a0-04dc832b3332","Type":"ContainerStarted","Data":"79f170b0f270f110e5b5d5229988d518034d01da466dbe2c3254336ea700274a"} Jan 31 15:04:58 crc kubenswrapper[4751]: I0131 15:04:58.194238 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"b39c23a2-492d-4401-bb73-6b4bfc849bec","Type":"ContainerStarted","Data":"0ccf08a2c3c9ee0e1a6fd7438c1d413c5265de24852c5a25306ee98ea0c2399b"} Jan 31 15:04:58 crc kubenswrapper[4751]: I0131 15:04:58.194368 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="b39c23a2-492d-4401-bb73-6b4bfc849bec" containerName="glance-log" containerID="cri-o://ade6d8caba0bf8e6958b676f72eece1bb73609f4cb6e8cdf0b7220751b79dcad" gracePeriod=30 Jan 31 15:04:58 crc kubenswrapper[4751]: I0131 15:04:58.194674 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="b39c23a2-492d-4401-bb73-6b4bfc849bec" containerName="glance-httpd" containerID="cri-o://0ccf08a2c3c9ee0e1a6fd7438c1d413c5265de24852c5a25306ee98ea0c2399b" gracePeriod=30 Jan 31 15:04:58 crc kubenswrapper[4751]: I0131 15:04:58.221901 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-external-api-0" podStartSLOduration=4.221865722 podStartE2EDuration="4.221865722s" podCreationTimestamp="2026-01-31 15:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:04:58.220792063 +0000 UTC m=+1402.595504968" watchObservedRunningTime="2026-01-31 15:04:58.221865722 +0000 UTC m=+1402.596578647" Jan 31 15:04:58 crc kubenswrapper[4751]: I0131 15:04:58.265091 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-1" 
podStartSLOduration=4.265051131 podStartE2EDuration="4.265051131s" podCreationTimestamp="2026-01-31 15:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:04:58.257597254 +0000 UTC m=+1402.632310149" watchObservedRunningTime="2026-01-31 15:04:58.265051131 +0000 UTC m=+1402.639764026" Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.203343 4751 generic.go:334] "Generic (PLEG): container finished" podID="b39c23a2-492d-4401-bb73-6b4bfc849bec" containerID="0ccf08a2c3c9ee0e1a6fd7438c1d413c5265de24852c5a25306ee98ea0c2399b" exitCode=143 Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.203735 4751 generic.go:334] "Generic (PLEG): container finished" podID="b39c23a2-492d-4401-bb73-6b4bfc849bec" containerID="ade6d8caba0bf8e6958b676f72eece1bb73609f4cb6e8cdf0b7220751b79dcad" exitCode=143 Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.203417 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"b39c23a2-492d-4401-bb73-6b4bfc849bec","Type":"ContainerDied","Data":"0ccf08a2c3c9ee0e1a6fd7438c1d413c5265de24852c5a25306ee98ea0c2399b"} Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.203772 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"b39c23a2-492d-4401-bb73-6b4bfc849bec","Type":"ContainerDied","Data":"ade6d8caba0bf8e6958b676f72eece1bb73609f4cb6e8cdf0b7220751b79dcad"} Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.205451 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"8309fec8-e5ee-4e23-8617-ab2e7ba833d6","Type":"ContainerStarted","Data":"0da059b6d1edf05d0ffe817ea6cbe4df21ba9242536bbbd9d3e7d479bd096546"} Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.207124 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"2ce40f98-80af-4a4b-8556-c5c7dd84fc58","Type":"ContainerStarted","Data":"5b307a953bf028387f69f75e58e1545d2c2c38f81096b44abf4639237badb848"} Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.563151 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.586284 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-external-api-1" podStartSLOduration=4.586265291 podStartE2EDuration="4.586265291s" podCreationTimestamp="2026-01-31 15:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:04:59.235250332 +0000 UTC m=+1403.609963217" watchObservedRunningTime="2026-01-31 15:04:59.586265291 +0000 UTC m=+1403.960978176" Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.676559 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b39c23a2-492d-4401-bb73-6b4bfc849bec-config-data\") pod \"b39c23a2-492d-4401-bb73-6b4bfc849bec\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.676610 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-run\") pod \"b39c23a2-492d-4401-bb73-6b4bfc849bec\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.676655 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b39c23a2-492d-4401-bb73-6b4bfc849bec-scripts\") pod \"b39c23a2-492d-4401-bb73-6b4bfc849bec\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") 
" Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.676674 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-var-locks-brick\") pod \"b39c23a2-492d-4401-bb73-6b4bfc849bec\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.676691 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-lib-modules\") pod \"b39c23a2-492d-4401-bb73-6b4bfc849bec\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.676712 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b65vr\" (UniqueName: \"kubernetes.io/projected/b39c23a2-492d-4401-bb73-6b4bfc849bec-kube-api-access-b65vr\") pod \"b39c23a2-492d-4401-bb73-6b4bfc849bec\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.676729 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"b39c23a2-492d-4401-bb73-6b4bfc849bec\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.676749 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b39c23a2-492d-4401-bb73-6b4bfc849bec-httpd-run\") pod \"b39c23a2-492d-4401-bb73-6b4bfc849bec\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.676761 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-dev\") pod 
\"b39c23a2-492d-4401-bb73-6b4bfc849bec\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.676772 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-run" (OuterVolumeSpecName: "run") pod "b39c23a2-492d-4401-bb73-6b4bfc849bec" (UID: "b39c23a2-492d-4401-bb73-6b4bfc849bec"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.676811 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "b39c23a2-492d-4401-bb73-6b4bfc849bec" (UID: "b39c23a2-492d-4401-bb73-6b4bfc849bec"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.676800 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"b39c23a2-492d-4401-bb73-6b4bfc849bec\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.676887 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-sys\") pod \"b39c23a2-492d-4401-bb73-6b4bfc849bec\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.676930 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-etc-iscsi\") pod \"b39c23a2-492d-4401-bb73-6b4bfc849bec\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 
15:04:59.677012 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b39c23a2-492d-4401-bb73-6b4bfc849bec-logs\") pod \"b39c23a2-492d-4401-bb73-6b4bfc849bec\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.677036 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-etc-nvme\") pod \"b39c23a2-492d-4401-bb73-6b4bfc849bec\" (UID: \"b39c23a2-492d-4401-bb73-6b4bfc849bec\") " Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.677292 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-sys" (OuterVolumeSpecName: "sys") pod "b39c23a2-492d-4401-bb73-6b4bfc849bec" (UID: "b39c23a2-492d-4401-bb73-6b4bfc849bec"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.677318 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "b39c23a2-492d-4401-bb73-6b4bfc849bec" (UID: "b39c23a2-492d-4401-bb73-6b4bfc849bec"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.677343 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "b39c23a2-492d-4401-bb73-6b4bfc849bec" (UID: "b39c23a2-492d-4401-bb73-6b4bfc849bec"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.677402 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "b39c23a2-492d-4401-bb73-6b4bfc849bec" (UID: "b39c23a2-492d-4401-bb73-6b4bfc849bec"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.677462 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b39c23a2-492d-4401-bb73-6b4bfc849bec-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "b39c23a2-492d-4401-bb73-6b4bfc849bec" (UID: "b39c23a2-492d-4401-bb73-6b4bfc849bec"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.677476 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-dev" (OuterVolumeSpecName: "dev") pod "b39c23a2-492d-4401-bb73-6b4bfc849bec" (UID: "b39c23a2-492d-4401-bb73-6b4bfc849bec"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.677579 4751 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.677595 4751 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.677603 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/b39c23a2-492d-4401-bb73-6b4bfc849bec-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.677613 4751 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-dev\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.677624 4751 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-sys\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.677632 4751 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.677640 4751 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.677648 4751 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: 
\"kubernetes.io/host-path/b39c23a2-492d-4401-bb73-6b4bfc849bec-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.677609 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b39c23a2-492d-4401-bb73-6b4bfc849bec-logs" (OuterVolumeSpecName: "logs") pod "b39c23a2-492d-4401-bb73-6b4bfc849bec" (UID: "b39c23a2-492d-4401-bb73-6b4bfc849bec"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.682275 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b39c23a2-492d-4401-bb73-6b4bfc849bec-kube-api-access-b65vr" (OuterVolumeSpecName: "kube-api-access-b65vr") pod "b39c23a2-492d-4401-bb73-6b4bfc849bec" (UID: "b39c23a2-492d-4401-bb73-6b4bfc849bec"). InnerVolumeSpecName "kube-api-access-b65vr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.682381 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage18-crc" (OuterVolumeSpecName: "glance-cache") pod "b39c23a2-492d-4401-bb73-6b4bfc849bec" (UID: "b39c23a2-492d-4401-bb73-6b4bfc849bec"). InnerVolumeSpecName "local-storage18-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.683470 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage16-crc" (OuterVolumeSpecName: "glance") pod "b39c23a2-492d-4401-bb73-6b4bfc849bec" (UID: "b39c23a2-492d-4401-bb73-6b4bfc849bec"). InnerVolumeSpecName "local-storage16-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.685810 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b39c23a2-492d-4401-bb73-6b4bfc849bec-scripts" (OuterVolumeSpecName: "scripts") pod "b39c23a2-492d-4401-bb73-6b4bfc849bec" (UID: "b39c23a2-492d-4401-bb73-6b4bfc849bec"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.723438 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b39c23a2-492d-4401-bb73-6b4bfc849bec-config-data" (OuterVolumeSpecName: "config-data") pod "b39c23a2-492d-4401-bb73-6b4bfc849bec" (UID: "b39c23a2-492d-4401-bb73-6b4bfc849bec"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.779533 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/b39c23a2-492d-4401-bb73-6b4bfc849bec-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.779569 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-b65vr\" (UniqueName: \"kubernetes.io/projected/b39c23a2-492d-4401-bb73-6b4bfc849bec-kube-api-access-b65vr\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.779607 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") on node \"crc\" " Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.779620 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") on node \"crc\" " Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 
15:04:59.779630 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/b39c23a2-492d-4401-bb73-6b4bfc849bec-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.779646 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/b39c23a2-492d-4401-bb73-6b4bfc849bec-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.794536 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage18-crc" (UniqueName: "kubernetes.io/local-volume/local-storage18-crc") on node "crc" Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.797979 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage16-crc" (UniqueName: "kubernetes.io/local-volume/local-storage16-crc") on node "crc" Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.881268 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:04:59 crc kubenswrapper[4751]: I0131 15:04:59.881314 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.220308 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"b39c23a2-492d-4401-bb73-6b4bfc849bec","Type":"ContainerDied","Data":"995975e00b456601ec8a0bc0c21da9af3666fa6889c9406bb7f743ae54f464a2"} Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.220417 4751 scope.go:117] "RemoveContainer" containerID="0ccf08a2c3c9ee0e1a6fd7438c1d413c5265de24852c5a25306ee98ea0c2399b" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.222441 4751 util.go:48] "No ready 
sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.260630 4751 scope.go:117] "RemoveContainer" containerID="ade6d8caba0bf8e6958b676f72eece1bb73609f4cb6e8cdf0b7220751b79dcad" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.261765 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-0" podStartSLOduration=6.261732636 podStartE2EDuration="6.261732636s" podCreationTimestamp="2026-01-31 15:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:05:00.247170082 +0000 UTC m=+1404.621882987" watchObservedRunningTime="2026-01-31 15:05:00.261732636 +0000 UTC m=+1404.636445561" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.280440 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.291485 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.308187 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 15:05:00 crc kubenswrapper[4751]: E0131 15:05:00.308796 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b39c23a2-492d-4401-bb73-6b4bfc849bec" containerName="glance-httpd" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.308888 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="b39c23a2-492d-4401-bb73-6b4bfc849bec" containerName="glance-httpd" Jan 31 15:05:00 crc kubenswrapper[4751]: E0131 15:05:00.308979 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b39c23a2-492d-4401-bb73-6b4bfc849bec" containerName="glance-log" Jan 31 15:05:00 crc 
kubenswrapper[4751]: I0131 15:05:00.309053 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="b39c23a2-492d-4401-bb73-6b4bfc849bec" containerName="glance-log" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.309299 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="b39c23a2-492d-4401-bb73-6b4bfc849bec" containerName="glance-log" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.309375 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="b39c23a2-492d-4401-bb73-6b4bfc849bec" containerName="glance-httpd" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.310260 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.321588 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.390472 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.390713 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-run\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.390865 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fada73df-4c18-4f18-9fcd-9fe24825a32c-logs\") pod 
\"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.390968 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-sys\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.391121 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-dev\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.391183 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fada73df-4c18-4f18-9fcd-9fe24825a32c-scripts\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.391281 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fada73df-4c18-4f18-9fcd-9fe24825a32c-config-data\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.391339 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: 
\"kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.391369 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.391433 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.391528 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.391691 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.391748 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fada73df-4c18-4f18-9fcd-9fe24825a32c-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.391771 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pmpk8\" (UniqueName: \"kubernetes.io/projected/fada73df-4c18-4f18-9fcd-9fe24825a32c-kube-api-access-pmpk8\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.418671 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b39c23a2-492d-4401-bb73-6b4bfc849bec" path="/var/lib/kubelet/pods/b39c23a2-492d-4401-bb73-6b4bfc849bec/volumes" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.493686 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-run\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.493930 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fada73df-4c18-4f18-9fcd-9fe24825a32c-logs\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.493772 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-run\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") 
" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.493978 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-sys\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.493958 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-sys\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.494044 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-dev\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.494108 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fada73df-4c18-4f18-9fcd-9fe24825a32c-scripts\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.494178 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fada73df-4c18-4f18-9fcd-9fe24825a32c-config-data\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: 
I0131 15:05:00.494212 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.494246 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.494291 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.494345 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.494414 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.494447 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fada73df-4c18-4f18-9fcd-9fe24825a32c-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.494478 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pmpk8\" (UniqueName: \"kubernetes.io/projected/fada73df-4c18-4f18-9fcd-9fe24825a32c-kube-api-access-pmpk8\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.494479 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fada73df-4c18-4f18-9fcd-9fe24825a32c-logs\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.494634 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.494826 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-etc-nvme\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.494867 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" 
(UniqueName: \"kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-etc-iscsi\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.494899 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-dev\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.495344 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-var-locks-brick\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.495391 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-lib-modules\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.495415 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") device mount path \"/mnt/openstack/pv18\"" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.495450 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage16-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") device mount path \"/mnt/openstack/pv16\"" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.495763 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fada73df-4c18-4f18-9fcd-9fe24825a32c-httpd-run\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.499461 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fada73df-4c18-4f18-9fcd-9fe24825a32c-scripts\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.504374 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fada73df-4c18-4f18-9fcd-9fe24825a32c-config-data\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.514279 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pmpk8\" (UniqueName: \"kubernetes.io/projected/fada73df-4c18-4f18-9fcd-9fe24825a32c-kube-api-access-pmpk8\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.525121 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage16-crc\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage16-crc\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.539853 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"glance-default-internal-api-1\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:00 crc kubenswrapper[4751]: I0131 15:05:00.629125 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:01 crc kubenswrapper[4751]: I0131 15:05:01.105127 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 15:05:01 crc kubenswrapper[4751]: I0131 15:05:01.231924 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"fada73df-4c18-4f18-9fcd-9fe24825a32c","Type":"ContainerStarted","Data":"4e85dc9929fef5538bbedbabd1bc4862934d09f3a214bd5393c16cf9dbfd21f7"} Jan 31 15:05:02 crc kubenswrapper[4751]: I0131 15:05:02.245291 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"fada73df-4c18-4f18-9fcd-9fe24825a32c","Type":"ContainerStarted","Data":"0622c62b16c45cccd093df45438f4801ecb9dee67a405110fa2bd8152369a536"} Jan 31 15:05:02 crc kubenswrapper[4751]: I0131 15:05:02.246940 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"fada73df-4c18-4f18-9fcd-9fe24825a32c","Type":"ContainerStarted","Data":"4ac3b73f999bb2d4398e520fc413f3d3e17b891e2a007e2b9a8454c50571906c"} Jan 31 15:05:02 crc kubenswrapper[4751]: I0131 15:05:02.271296 4751 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-1" podStartSLOduration=2.271277753 podStartE2EDuration="2.271277753s" podCreationTimestamp="2026-01-31 15:05:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:05:02.267598356 +0000 UTC m=+1406.642311261" watchObservedRunningTime="2026-01-31 15:05:02.271277753 +0000 UTC m=+1406.645990638" Jan 31 15:05:05 crc kubenswrapper[4751]: I0131 15:05:05.864403 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:05 crc kubenswrapper[4751]: I0131 15:05:05.864884 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:05 crc kubenswrapper[4751]: I0131 15:05:05.892731 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:05 crc kubenswrapper[4751]: I0131 15:05:05.916930 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:05 crc kubenswrapper[4751]: I0131 15:05:05.931497 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:05:05 crc kubenswrapper[4751]: I0131 15:05:05.931590 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:05:05 crc kubenswrapper[4751]: I0131 15:05:05.967195 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:05:05 crc kubenswrapper[4751]: I0131 15:05:05.980695 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" 
status="started" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:05:06 crc kubenswrapper[4751]: I0131 15:05:06.020879 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:06 crc kubenswrapper[4751]: I0131 15:05:06.020938 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:06 crc kubenswrapper[4751]: I0131 15:05:06.043773 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:06 crc kubenswrapper[4751]: I0131 15:05:06.068572 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:06 crc kubenswrapper[4751]: I0131 15:05:06.279879 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:06 crc kubenswrapper[4751]: I0131 15:05:06.279968 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:05:06 crc kubenswrapper[4751]: I0131 15:05:06.280305 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:05:06 crc kubenswrapper[4751]: I0131 15:05:06.280335 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:06 crc kubenswrapper[4751]: I0131 15:05:06.280347 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:06 crc kubenswrapper[4751]: I0131 15:05:06.280356 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:08 crc 
kubenswrapper[4751]: I0131 15:05:08.188134 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:08 crc kubenswrapper[4751]: I0131 15:05:08.194109 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:08 crc kubenswrapper[4751]: I0131 15:05:08.229908 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:08 crc kubenswrapper[4751]: I0131 15:05:08.230713 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:05:08 crc kubenswrapper[4751]: I0131 15:05:08.245634 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:08 crc kubenswrapper[4751]: I0131 15:05:08.248853 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:05:08 crc kubenswrapper[4751]: I0131 15:05:08.355944 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 15:05:08 crc kubenswrapper[4751]: I0131 15:05:08.896701 4751 patch_prober.go:28] interesting pod/machine-config-daemon-2wpj7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:05:08 crc kubenswrapper[4751]: I0131 15:05:08.896751 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:05:10 crc kubenswrapper[4751]: I0131 15:05:10.321784 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="d1590321-a9e4-43b9-a9a0-04dc832b3332" containerName="glance-log" containerID="cri-o://efd3679f0fb441a01fa19571a9aa5ba37ff46c03bfa892f113c431e3a9022ffe" gracePeriod=30 Jan 31 15:05:10 crc kubenswrapper[4751]: I0131 15:05:10.321892 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="d1590321-a9e4-43b9-a9a0-04dc832b3332" containerName="glance-httpd" containerID="cri-o://79f170b0f270f110e5b5d5229988d518034d01da466dbe2c3254336ea700274a" gracePeriod=30 Jan 31 15:05:10 crc kubenswrapper[4751]: I0131 15:05:10.629880 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:10 crc kubenswrapper[4751]: I0131 15:05:10.629953 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:10 crc kubenswrapper[4751]: I0131 15:05:10.673503 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:10 crc kubenswrapper[4751]: I0131 15:05:10.702166 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:11 crc kubenswrapper[4751]: I0131 15:05:11.332564 4751 generic.go:334] "Generic (PLEG): container finished" podID="d1590321-a9e4-43b9-a9a0-04dc832b3332" containerID="efd3679f0fb441a01fa19571a9aa5ba37ff46c03bfa892f113c431e3a9022ffe" exitCode=143 Jan 31 15:05:11 crc kubenswrapper[4751]: I0131 15:05:11.332606 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" 
event={"ID":"d1590321-a9e4-43b9-a9a0-04dc832b3332","Type":"ContainerDied","Data":"efd3679f0fb441a01fa19571a9aa5ba37ff46c03bfa892f113c431e3a9022ffe"} Jan 31 15:05:11 crc kubenswrapper[4751]: I0131 15:05:11.332868 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:11 crc kubenswrapper[4751]: I0131 15:05:11.332891 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:13 crc kubenswrapper[4751]: I0131 15:05:13.228877 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:13 crc kubenswrapper[4751]: I0131 15:05:13.231548 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:05:13 crc kubenswrapper[4751]: I0131 15:05:13.291724 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 15:05:13 crc kubenswrapper[4751]: I0131 15:05:13.292061 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="8309fec8-e5ee-4e23-8617-ab2e7ba833d6" containerName="glance-log" containerID="cri-o://2af1683a972bf70a8cfc2843e554280e84c5731cd7be66a108127c478a878dd2" gracePeriod=30 Jan 31 15:05:13 crc kubenswrapper[4751]: I0131 15:05:13.292121 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="8309fec8-e5ee-4e23-8617-ab2e7ba833d6" containerName="glance-httpd" containerID="cri-o://0da059b6d1edf05d0ffe817ea6cbe4df21ba9242536bbbd9d3e7d479bd096546" gracePeriod=30 Jan 31 15:05:13 crc kubenswrapper[4751]: I0131 15:05:13.820757 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:13 crc kubenswrapper[4751]: I0131 15:05:13.908936 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g9x4g\" (UniqueName: \"kubernetes.io/projected/d1590321-a9e4-43b9-a9a0-04dc832b3332-kube-api-access-g9x4g\") pod \"d1590321-a9e4-43b9-a9a0-04dc832b3332\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " Jan 31 15:05:13 crc kubenswrapper[4751]: I0131 15:05:13.908980 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-dev\") pod \"d1590321-a9e4-43b9-a9a0-04dc832b3332\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " Jan 31 15:05:13 crc kubenswrapper[4751]: I0131 15:05:13.909009 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1590321-a9e4-43b9-a9a0-04dc832b3332-logs\") pod \"d1590321-a9e4-43b9-a9a0-04dc832b3332\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " Jan 31 15:05:13 crc kubenswrapper[4751]: I0131 15:05:13.909026 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"d1590321-a9e4-43b9-a9a0-04dc832b3332\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " Jan 31 15:05:13 crc kubenswrapper[4751]: I0131 15:05:13.909048 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-etc-nvme\") pod \"d1590321-a9e4-43b9-a9a0-04dc832b3332\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " Jan 31 15:05:13 crc kubenswrapper[4751]: I0131 15:05:13.909098 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: 
\"kubernetes.io/local-volume/local-storage02-crc\") pod \"d1590321-a9e4-43b9-a9a0-04dc832b3332\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " Jan 31 15:05:13 crc kubenswrapper[4751]: I0131 15:05:13.909149 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1590321-a9e4-43b9-a9a0-04dc832b3332-config-data\") pod \"d1590321-a9e4-43b9-a9a0-04dc832b3332\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " Jan 31 15:05:13 crc kubenswrapper[4751]: I0131 15:05:13.909245 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d1590321-a9e4-43b9-a9a0-04dc832b3332-httpd-run\") pod \"d1590321-a9e4-43b9-a9a0-04dc832b3332\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " Jan 31 15:05:13 crc kubenswrapper[4751]: I0131 15:05:13.909293 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-etc-iscsi\") pod \"d1590321-a9e4-43b9-a9a0-04dc832b3332\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " Jan 31 15:05:13 crc kubenswrapper[4751]: I0131 15:05:13.909313 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-run\") pod \"d1590321-a9e4-43b9-a9a0-04dc832b3332\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " Jan 31 15:05:13 crc kubenswrapper[4751]: I0131 15:05:13.909333 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-sys\") pod \"d1590321-a9e4-43b9-a9a0-04dc832b3332\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " Jan 31 15:05:13 crc kubenswrapper[4751]: I0131 15:05:13.909363 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-var-locks-brick\") pod \"d1590321-a9e4-43b9-a9a0-04dc832b3332\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " Jan 31 15:05:13 crc kubenswrapper[4751]: I0131 15:05:13.909377 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-lib-modules\") pod \"d1590321-a9e4-43b9-a9a0-04dc832b3332\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " Jan 31 15:05:13 crc kubenswrapper[4751]: I0131 15:05:13.909396 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1590321-a9e4-43b9-a9a0-04dc832b3332-scripts\") pod \"d1590321-a9e4-43b9-a9a0-04dc832b3332\" (UID: \"d1590321-a9e4-43b9-a9a0-04dc832b3332\") " Jan 31 15:05:13 crc kubenswrapper[4751]: I0131 15:05:13.910513 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-run" (OuterVolumeSpecName: "run") pod "d1590321-a9e4-43b9-a9a0-04dc832b3332" (UID: "d1590321-a9e4-43b9-a9a0-04dc832b3332"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:05:13 crc kubenswrapper[4751]: I0131 15:05:13.910600 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "d1590321-a9e4-43b9-a9a0-04dc832b3332" (UID: "d1590321-a9e4-43b9-a9a0-04dc832b3332"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:05:13 crc kubenswrapper[4751]: I0131 15:05:13.910612 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "d1590321-a9e4-43b9-a9a0-04dc832b3332" (UID: "d1590321-a9e4-43b9-a9a0-04dc832b3332"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:05:13 crc kubenswrapper[4751]: I0131 15:05:13.910635 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-sys" (OuterVolumeSpecName: "sys") pod "d1590321-a9e4-43b9-a9a0-04dc832b3332" (UID: "d1590321-a9e4-43b9-a9a0-04dc832b3332"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:05:13 crc kubenswrapper[4751]: I0131 15:05:13.910695 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "d1590321-a9e4-43b9-a9a0-04dc832b3332" (UID: "d1590321-a9e4-43b9-a9a0-04dc832b3332"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:05:13 crc kubenswrapper[4751]: I0131 15:05:13.910718 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "d1590321-a9e4-43b9-a9a0-04dc832b3332" (UID: "d1590321-a9e4-43b9-a9a0-04dc832b3332"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:05:13 crc kubenswrapper[4751]: I0131 15:05:13.910790 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-dev" (OuterVolumeSpecName: "dev") pod "d1590321-a9e4-43b9-a9a0-04dc832b3332" (UID: "d1590321-a9e4-43b9-a9a0-04dc832b3332"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:05:13 crc kubenswrapper[4751]: I0131 15:05:13.910869 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1590321-a9e4-43b9-a9a0-04dc832b3332-logs" (OuterVolumeSpecName: "logs") pod "d1590321-a9e4-43b9-a9a0-04dc832b3332" (UID: "d1590321-a9e4-43b9-a9a0-04dc832b3332"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:05:13 crc kubenswrapper[4751]: I0131 15:05:13.911073 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d1590321-a9e4-43b9-a9a0-04dc832b3332-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "d1590321-a9e4-43b9-a9a0-04dc832b3332" (UID: "d1590321-a9e4-43b9-a9a0-04dc832b3332"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:05:13 crc kubenswrapper[4751]: I0131 15:05:13.919236 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d1590321-a9e4-43b9-a9a0-04dc832b3332-kube-api-access-g9x4g" (OuterVolumeSpecName: "kube-api-access-g9x4g") pod "d1590321-a9e4-43b9-a9a0-04dc832b3332" (UID: "d1590321-a9e4-43b9-a9a0-04dc832b3332"). InnerVolumeSpecName "kube-api-access-g9x4g". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:05:13 crc kubenswrapper[4751]: I0131 15:05:13.927981 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "d1590321-a9e4-43b9-a9a0-04dc832b3332" (UID: "d1590321-a9e4-43b9-a9a0-04dc832b3332"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:05:13 crc kubenswrapper[4751]: I0131 15:05:13.927997 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1590321-a9e4-43b9-a9a0-04dc832b3332-scripts" (OuterVolumeSpecName: "scripts") pod "d1590321-a9e4-43b9-a9a0-04dc832b3332" (UID: "d1590321-a9e4-43b9-a9a0-04dc832b3332"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:05:13 crc kubenswrapper[4751]: I0131 15:05:13.928178 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance-cache") pod "d1590321-a9e4-43b9-a9a0-04dc832b3332" (UID: "d1590321-a9e4-43b9-a9a0-04dc832b3332"). InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:05:13 crc kubenswrapper[4751]: I0131 15:05:13.956931 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d1590321-a9e4-43b9-a9a0-04dc832b3332-config-data" (OuterVolumeSpecName: "config-data") pod "d1590321-a9e4-43b9-a9a0-04dc832b3332" (UID: "d1590321-a9e4-43b9-a9a0-04dc832b3332"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.010773 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/d1590321-a9e4-43b9-a9a0-04dc832b3332-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.010813 4751 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.010830 4751 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.010842 4751 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-sys\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.010853 4751 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.010866 4751 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.010876 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/d1590321-a9e4-43b9-a9a0-04dc832b3332-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.010899 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g9x4g\" 
(UniqueName: \"kubernetes.io/projected/d1590321-a9e4-43b9-a9a0-04dc832b3332-kube-api-access-g9x4g\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.010912 4751 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-dev\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.010922 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/d1590321-a9e4-43b9-a9a0-04dc832b3332-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.010953 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.010964 4751 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/d1590321-a9e4-43b9-a9a0-04dc832b3332-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.010980 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.010992 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/d1590321-a9e4-43b9-a9a0-04dc832b3332-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.037973 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.044155 4751 operation_generator.go:917] UnmountDevice succeeded 
for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.112813 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.112854 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.364393 4751 generic.go:334] "Generic (PLEG): container finished" podID="d1590321-a9e4-43b9-a9a0-04dc832b3332" containerID="79f170b0f270f110e5b5d5229988d518034d01da466dbe2c3254336ea700274a" exitCode=0 Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.364449 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"d1590321-a9e4-43b9-a9a0-04dc832b3332","Type":"ContainerDied","Data":"79f170b0f270f110e5b5d5229988d518034d01da466dbe2c3254336ea700274a"} Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.364504 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"d1590321-a9e4-43b9-a9a0-04dc832b3332","Type":"ContainerDied","Data":"74ea627d68211382ddcd6045fc064247fa1aa0e13ee2656765ee16aee6d9bb61"} Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.364522 4751 scope.go:117] "RemoveContainer" containerID="79f170b0f270f110e5b5d5229988d518034d01da466dbe2c3254336ea700274a" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.364467 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.367570 4751 generic.go:334] "Generic (PLEG): container finished" podID="8309fec8-e5ee-4e23-8617-ab2e7ba833d6" containerID="2af1683a972bf70a8cfc2843e554280e84c5731cd7be66a108127c478a878dd2" exitCode=143 Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.369025 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"8309fec8-e5ee-4e23-8617-ab2e7ba833d6","Type":"ContainerDied","Data":"2af1683a972bf70a8cfc2843e554280e84c5731cd7be66a108127c478a878dd2"} Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.388084 4751 scope.go:117] "RemoveContainer" containerID="efd3679f0fb441a01fa19571a9aa5ba37ff46c03bfa892f113c431e3a9022ffe" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.410417 4751 scope.go:117] "RemoveContainer" containerID="79f170b0f270f110e5b5d5229988d518034d01da466dbe2c3254336ea700274a" Jan 31 15:05:14 crc kubenswrapper[4751]: E0131 15:05:14.410809 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79f170b0f270f110e5b5d5229988d518034d01da466dbe2c3254336ea700274a\": container with ID starting with 79f170b0f270f110e5b5d5229988d518034d01da466dbe2c3254336ea700274a not found: ID does not exist" containerID="79f170b0f270f110e5b5d5229988d518034d01da466dbe2c3254336ea700274a" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.410846 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79f170b0f270f110e5b5d5229988d518034d01da466dbe2c3254336ea700274a"} err="failed to get container status \"79f170b0f270f110e5b5d5229988d518034d01da466dbe2c3254336ea700274a\": rpc error: code = NotFound desc = could not find container \"79f170b0f270f110e5b5d5229988d518034d01da466dbe2c3254336ea700274a\": container with ID starting with 
79f170b0f270f110e5b5d5229988d518034d01da466dbe2c3254336ea700274a not found: ID does not exist" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.410874 4751 scope.go:117] "RemoveContainer" containerID="efd3679f0fb441a01fa19571a9aa5ba37ff46c03bfa892f113c431e3a9022ffe" Jan 31 15:05:14 crc kubenswrapper[4751]: E0131 15:05:14.411347 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"efd3679f0fb441a01fa19571a9aa5ba37ff46c03bfa892f113c431e3a9022ffe\": container with ID starting with efd3679f0fb441a01fa19571a9aa5ba37ff46c03bfa892f113c431e3a9022ffe not found: ID does not exist" containerID="efd3679f0fb441a01fa19571a9aa5ba37ff46c03bfa892f113c431e3a9022ffe" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.411374 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"efd3679f0fb441a01fa19571a9aa5ba37ff46c03bfa892f113c431e3a9022ffe"} err="failed to get container status \"efd3679f0fb441a01fa19571a9aa5ba37ff46c03bfa892f113c431e3a9022ffe\": rpc error: code = NotFound desc = could not find container \"efd3679f0fb441a01fa19571a9aa5ba37ff46c03bfa892f113c431e3a9022ffe\": container with ID starting with efd3679f0fb441a01fa19571a9aa5ba37ff46c03bfa892f113c431e3a9022ffe not found: ID does not exist" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.429206 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.436772 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.466511 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 15:05:14 crc kubenswrapper[4751]: E0131 15:05:14.466902 4751 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="d1590321-a9e4-43b9-a9a0-04dc832b3332" containerName="glance-log" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.466921 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1590321-a9e4-43b9-a9a0-04dc832b3332" containerName="glance-log" Jan 31 15:05:14 crc kubenswrapper[4751]: E0131 15:05:14.466962 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d1590321-a9e4-43b9-a9a0-04dc832b3332" containerName="glance-httpd" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.466971 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d1590321-a9e4-43b9-a9a0-04dc832b3332" containerName="glance-httpd" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.467146 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1590321-a9e4-43b9-a9a0-04dc832b3332" containerName="glance-log" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.467166 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="d1590321-a9e4-43b9-a9a0-04dc832b3332" containerName="glance-httpd" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.468047 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.473910 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.621891 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-dev\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.622261 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90af064c-9d0a-4818-8e19-c87da44a879b-scripts\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.622301 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.622329 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-sys\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.622437 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90af064c-9d0a-4818-8e19-c87da44a879b-logs\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.622736 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/90af064c-9d0a-4818-8e19-c87da44a879b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.622783 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.623006 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.623040 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.623287 4751 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rwj7s\" (UniqueName: \"kubernetes.io/projected/90af064c-9d0a-4818-8e19-c87da44a879b-kube-api-access-rwj7s\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.623320 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-run\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.623365 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.623405 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.623435 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90af064c-9d0a-4818-8e19-c87da44a879b-config-data\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 
15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.725275 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rwj7s\" (UniqueName: \"kubernetes.io/projected/90af064c-9d0a-4818-8e19-c87da44a879b-kube-api-access-rwj7s\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.725346 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-run\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.725398 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.725435 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.725468 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-run\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.725480 
4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90af064c-9d0a-4818-8e19-c87da44a879b-config-data\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.725571 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-dev\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.725593 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-var-locks-brick\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.725604 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90af064c-9d0a-4818-8e19-c87da44a879b-scripts\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.725696 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-dev\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.725706 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.725743 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-etc-iscsi\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.725767 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") device mount path \"/mnt/openstack/pv02\"" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.725803 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-sys\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.725772 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-sys\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.725857 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/90af064c-9d0a-4818-8e19-c87da44a879b-logs\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.725935 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/90af064c-9d0a-4818-8e19-c87da44a879b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.725991 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.726042 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.726110 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.726228 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod 
\"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") device mount path \"/mnt/openstack/pv05\"" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.726265 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-lib-modules\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.726341 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-etc-nvme\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.726386 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/90af064c-9d0a-4818-8e19-c87da44a879b-httpd-run\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.726605 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90af064c-9d0a-4818-8e19-c87da44a879b-logs\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.731096 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90af064c-9d0a-4818-8e19-c87da44a879b-scripts\") pod \"glance-default-external-api-0\" (UID: 
\"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.732664 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90af064c-9d0a-4818-8e19-c87da44a879b-config-data\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.754006 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.758476 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rwj7s\" (UniqueName: \"kubernetes.io/projected/90af064c-9d0a-4818-8e19-c87da44a879b-kube-api-access-rwj7s\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.777615 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"glance-default-external-api-0\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:14 crc kubenswrapper[4751]: I0131 15:05:14.789938 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:15 crc kubenswrapper[4751]: I0131 15:05:15.216993 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 15:05:15 crc kubenswrapper[4751]: I0131 15:05:15.378479 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"90af064c-9d0a-4818-8e19-c87da44a879b","Type":"ContainerStarted","Data":"8b807b8111d0f0f33aee37200b1958ff47b158280257c5ed2833cf9c5c3a286f"} Jan 31 15:05:15 crc kubenswrapper[4751]: I0131 15:05:15.378876 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"90af064c-9d0a-4818-8e19-c87da44a879b","Type":"ContainerStarted","Data":"e99fc8a548b6e6e8e6da564fb55696f96c325bf5ca3500bbda2b1e9e31f7bf04"} Jan 31 15:05:16 crc kubenswrapper[4751]: I0131 15:05:16.390593 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"90af064c-9d0a-4818-8e19-c87da44a879b","Type":"ContainerStarted","Data":"e0cc5d1490293994ec3b55b9f608e90b08473604cf0ebba39024c27ee8d6005b"} Jan 31 15:05:16 crc kubenswrapper[4751]: I0131 15:05:16.419024 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d1590321-a9e4-43b9-a9a0-04dc832b3332" path="/var/lib/kubelet/pods/d1590321-a9e4-43b9-a9a0-04dc832b3332/volumes" Jan 31 15:05:16 crc kubenswrapper[4751]: I0131 15:05:16.439922 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-external-api-0" podStartSLOduration=2.439902557 podStartE2EDuration="2.439902557s" podCreationTimestamp="2026-01-31 15:05:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:05:16.423111004 +0000 UTC m=+1420.797823889" 
watchObservedRunningTime="2026-01-31 15:05:16.439902557 +0000 UTC m=+1420.814615452" Jan 31 15:05:16 crc kubenswrapper[4751]: I0131 15:05:16.864605 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:16 crc kubenswrapper[4751]: I0131 15:05:16.964893 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-logs\") pod \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " Jan 31 15:05:16 crc kubenswrapper[4751]: I0131 15:05:16.964938 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " Jan 31 15:05:16 crc kubenswrapper[4751]: I0131 15:05:16.964968 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-sys\") pod \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " Jan 31 15:05:16 crc kubenswrapper[4751]: I0131 15:05:16.965034 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-config-data\") pod \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " Jan 31 15:05:16 crc kubenswrapper[4751]: I0131 15:05:16.965062 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-lib-modules\") pod \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " Jan 31 15:05:16 crc 
kubenswrapper[4751]: I0131 15:05:16.965112 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-etc-nvme\") pod \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " Jan 31 15:05:16 crc kubenswrapper[4751]: I0131 15:05:16.965166 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-httpd-run\") pod \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " Jan 31 15:05:16 crc kubenswrapper[4751]: I0131 15:05:16.965187 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-run\") pod \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " Jan 31 15:05:16 crc kubenswrapper[4751]: I0131 15:05:16.965209 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mj5bx\" (UniqueName: \"kubernetes.io/projected/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-kube-api-access-mj5bx\") pod \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " Jan 31 15:05:16 crc kubenswrapper[4751]: I0131 15:05:16.965252 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-scripts\") pod \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " Jan 31 15:05:16 crc kubenswrapper[4751]: I0131 15:05:16.965279 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\" (UID: 
\"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " Jan 31 15:05:16 crc kubenswrapper[4751]: I0131 15:05:16.965316 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-etc-iscsi\") pod \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " Jan 31 15:05:16 crc kubenswrapper[4751]: I0131 15:05:16.965353 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-var-locks-brick\") pod \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " Jan 31 15:05:16 crc kubenswrapper[4751]: I0131 15:05:16.965391 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-dev\") pod \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\" (UID: \"8309fec8-e5ee-4e23-8617-ab2e7ba833d6\") " Jan 31 15:05:16 crc kubenswrapper[4751]: I0131 15:05:16.965795 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-dev" (OuterVolumeSpecName: "dev") pod "8309fec8-e5ee-4e23-8617-ab2e7ba833d6" (UID: "8309fec8-e5ee-4e23-8617-ab2e7ba833d6"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:05:16 crc kubenswrapper[4751]: I0131 15:05:16.965835 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "8309fec8-e5ee-4e23-8617-ab2e7ba833d6" (UID: "8309fec8-e5ee-4e23-8617-ab2e7ba833d6"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:05:16 crc kubenswrapper[4751]: I0131 15:05:16.965835 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-sys" (OuterVolumeSpecName: "sys") pod "8309fec8-e5ee-4e23-8617-ab2e7ba833d6" (UID: "8309fec8-e5ee-4e23-8617-ab2e7ba833d6"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:05:16 crc kubenswrapper[4751]: I0131 15:05:16.965877 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "8309fec8-e5ee-4e23-8617-ab2e7ba833d6" (UID: "8309fec8-e5ee-4e23-8617-ab2e7ba833d6"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:05:16 crc kubenswrapper[4751]: I0131 15:05:16.965907 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "8309fec8-e5ee-4e23-8617-ab2e7ba833d6" (UID: "8309fec8-e5ee-4e23-8617-ab2e7ba833d6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:05:16 crc kubenswrapper[4751]: I0131 15:05:16.965950 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "8309fec8-e5ee-4e23-8617-ab2e7ba833d6" (UID: "8309fec8-e5ee-4e23-8617-ab2e7ba833d6"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:05:16 crc kubenswrapper[4751]: I0131 15:05:16.965953 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-run" (OuterVolumeSpecName: "run") pod "8309fec8-e5ee-4e23-8617-ab2e7ba833d6" (UID: "8309fec8-e5ee-4e23-8617-ab2e7ba833d6"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:05:16 crc kubenswrapper[4751]: I0131 15:05:16.966119 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "8309fec8-e5ee-4e23-8617-ab2e7ba833d6" (UID: "8309fec8-e5ee-4e23-8617-ab2e7ba833d6"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:05:16 crc kubenswrapper[4751]: I0131 15:05:16.966193 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-logs" (OuterVolumeSpecName: "logs") pod "8309fec8-e5ee-4e23-8617-ab2e7ba833d6" (UID: "8309fec8-e5ee-4e23-8617-ab2e7ba833d6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:05:16 crc kubenswrapper[4751]: I0131 15:05:16.971153 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance-cache") pod "8309fec8-e5ee-4e23-8617-ab2e7ba833d6" (UID: "8309fec8-e5ee-4e23-8617-ab2e7ba833d6"). InnerVolumeSpecName "local-storage08-crc". 
PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:05:16 crc kubenswrapper[4751]: I0131 15:05:16.972202 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-scripts" (OuterVolumeSpecName: "scripts") pod "8309fec8-e5ee-4e23-8617-ab2e7ba833d6" (UID: "8309fec8-e5ee-4e23-8617-ab2e7ba833d6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:05:16 crc kubenswrapper[4751]: I0131 15:05:16.973491 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-kube-api-access-mj5bx" (OuterVolumeSpecName: "kube-api-access-mj5bx") pod "8309fec8-e5ee-4e23-8617-ab2e7ba833d6" (UID: "8309fec8-e5ee-4e23-8617-ab2e7ba833d6"). InnerVolumeSpecName "kube-api-access-mj5bx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:05:16 crc kubenswrapper[4751]: I0131 15:05:16.982591 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage15-crc" (OuterVolumeSpecName: "glance") pod "8309fec8-e5ee-4e23-8617-ab2e7ba833d6" (UID: "8309fec8-e5ee-4e23-8617-ab2e7ba833d6"). InnerVolumeSpecName "local-storage15-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.022200 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-config-data" (OuterVolumeSpecName: "config-data") pod "8309fec8-e5ee-4e23-8617-ab2e7ba833d6" (UID: "8309fec8-e5ee-4e23-8617-ab2e7ba833d6"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.067601 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.067635 4751 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.067651 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mj5bx\" (UniqueName: \"kubernetes.io/projected/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-kube-api-access-mj5bx\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.067686 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.067720 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.067731 4751 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.067740 4751 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.067749 4751 reconciler_common.go:293] "Volume detached for volume 
\"dev\" (UniqueName: \"kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-dev\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.067758 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.067771 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") on node \"crc\" " Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.067779 4751 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-sys\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.067787 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.067795 4751 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.067803 4751 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/8309fec8-e5ee-4e23-8617-ab2e7ba833d6-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.083408 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage15-crc" (UniqueName: "kubernetes.io/local-volume/local-storage15-crc") on node "crc" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.086786 4751 operation_generator.go:917] UnmountDevice succeeded 
for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.168884 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.168916 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.399163 4751 generic.go:334] "Generic (PLEG): container finished" podID="8309fec8-e5ee-4e23-8617-ab2e7ba833d6" containerID="0da059b6d1edf05d0ffe817ea6cbe4df21ba9242536bbbd9d3e7d479bd096546" exitCode=0 Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.399260 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.399253 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"8309fec8-e5ee-4e23-8617-ab2e7ba833d6","Type":"ContainerDied","Data":"0da059b6d1edf05d0ffe817ea6cbe4df21ba9242536bbbd9d3e7d479bd096546"} Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.399330 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"8309fec8-e5ee-4e23-8617-ab2e7ba833d6","Type":"ContainerDied","Data":"dd5644a60c6d37de5875344e64047668e5f4f00160a8166735d8995e0b9a3862"} Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.399355 4751 scope.go:117] "RemoveContainer" containerID="0da059b6d1edf05d0ffe817ea6cbe4df21ba9242536bbbd9d3e7d479bd096546" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.435968 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.445234 4751 scope.go:117] "RemoveContainer" containerID="2af1683a972bf70a8cfc2843e554280e84c5731cd7be66a108127c478a878dd2" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.448634 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.463190 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 15:05:17 crc kubenswrapper[4751]: E0131 15:05:17.463616 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8309fec8-e5ee-4e23-8617-ab2e7ba833d6" containerName="glance-log" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.463634 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8309fec8-e5ee-4e23-8617-ab2e7ba833d6" containerName="glance-log" Jan 31 15:05:17 crc kubenswrapper[4751]: E0131 15:05:17.463659 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8309fec8-e5ee-4e23-8617-ab2e7ba833d6" containerName="glance-httpd" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.463664 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8309fec8-e5ee-4e23-8617-ab2e7ba833d6" containerName="glance-httpd" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.463801 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="8309fec8-e5ee-4e23-8617-ab2e7ba833d6" containerName="glance-log" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.463813 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="8309fec8-e5ee-4e23-8617-ab2e7ba833d6" containerName="glance-httpd" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.464687 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.472022 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.472305 4751 scope.go:117] "RemoveContainer" containerID="0da059b6d1edf05d0ffe817ea6cbe4df21ba9242536bbbd9d3e7d479bd096546" Jan 31 15:05:17 crc kubenswrapper[4751]: E0131 15:05:17.480240 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0da059b6d1edf05d0ffe817ea6cbe4df21ba9242536bbbd9d3e7d479bd096546\": container with ID starting with 0da059b6d1edf05d0ffe817ea6cbe4df21ba9242536bbbd9d3e7d479bd096546 not found: ID does not exist" containerID="0da059b6d1edf05d0ffe817ea6cbe4df21ba9242536bbbd9d3e7d479bd096546" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.480299 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0da059b6d1edf05d0ffe817ea6cbe4df21ba9242536bbbd9d3e7d479bd096546"} err="failed to get container status \"0da059b6d1edf05d0ffe817ea6cbe4df21ba9242536bbbd9d3e7d479bd096546\": rpc error: code = NotFound desc = could not find container \"0da059b6d1edf05d0ffe817ea6cbe4df21ba9242536bbbd9d3e7d479bd096546\": container with ID starting with 0da059b6d1edf05d0ffe817ea6cbe4df21ba9242536bbbd9d3e7d479bd096546 not found: ID does not exist" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.480332 4751 scope.go:117] "RemoveContainer" containerID="2af1683a972bf70a8cfc2843e554280e84c5731cd7be66a108127c478a878dd2" Jan 31 15:05:17 crc kubenswrapper[4751]: E0131 15:05:17.483126 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2af1683a972bf70a8cfc2843e554280e84c5731cd7be66a108127c478a878dd2\": container with ID starting with 
2af1683a972bf70a8cfc2843e554280e84c5731cd7be66a108127c478a878dd2 not found: ID does not exist" containerID="2af1683a972bf70a8cfc2843e554280e84c5731cd7be66a108127c478a878dd2" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.483184 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2af1683a972bf70a8cfc2843e554280e84c5731cd7be66a108127c478a878dd2"} err="failed to get container status \"2af1683a972bf70a8cfc2843e554280e84c5731cd7be66a108127c478a878dd2\": rpc error: code = NotFound desc = could not find container \"2af1683a972bf70a8cfc2843e554280e84c5731cd7be66a108127c478a878dd2\": container with ID starting with 2af1683a972bf70a8cfc2843e554280e84c5731cd7be66a108127c478a878dd2 not found: ID does not exist" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.575243 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0e8efba-9adc-482b-bd77-553d76648ac6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.575509 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-dev\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.575583 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0e8efba-9adc-482b-bd77-553d76648ac6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc 
kubenswrapper[4751]: I0131 15:05:17.575645 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.575770 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.575859 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0e8efba-9adc-482b-bd77-553d76648ac6-logs\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.575927 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l9kwf\" (UniqueName: \"kubernetes.io/projected/e0e8efba-9adc-482b-bd77-553d76648ac6-kube-api-access-l9kwf\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.576019 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-sys\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " 
pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.576109 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.576184 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0e8efba-9adc-482b-bd77-553d76648ac6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.576248 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-run\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.576338 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.576554 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"glance-default-internal-api-0\" (UID: 
\"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.576707 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.681212 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.681267 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.681302 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.681323 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0e8efba-9adc-482b-bd77-553d76648ac6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: 
\"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.681358 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-dev\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.681376 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0e8efba-9adc-482b-bd77-553d76648ac6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.681391 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.681419 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.681446 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0e8efba-9adc-482b-bd77-553d76648ac6-logs\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " 
pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.681445 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") device mount path \"/mnt/openstack/pv15\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.681495 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-etc-nvme\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.681800 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-var-locks-brick\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.681833 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-etc-iscsi\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.682237 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0e8efba-9adc-482b-bd77-553d76648ac6-logs\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" 
Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.682487 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0e8efba-9adc-482b-bd77-553d76648ac6-httpd-run\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.682527 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-lib-modules\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.682549 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-dev\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.681462 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-l9kwf\" (UniqueName: \"kubernetes.io/projected/e0e8efba-9adc-482b-bd77-553d76648ac6-kube-api-access-l9kwf\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.682586 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-sys\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.682605 4751 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.682621 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-run\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.682635 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0e8efba-9adc-482b-bd77-553d76648ac6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.684735 4751 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") device mount path \"/mnt/openstack/pv08\"" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.685265 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-run\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.685331 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"sys\" (UniqueName: \"kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-sys\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.689950 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0e8efba-9adc-482b-bd77-553d76648ac6-scripts\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.690570 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0e8efba-9adc-482b-bd77-553d76648ac6-config-data\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.701863 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-l9kwf\" (UniqueName: \"kubernetes.io/projected/e0e8efba-9adc-482b-bd77-553d76648ac6-kube-api-access-l9kwf\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.716847 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.717703 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod 
\"glance-default-internal-api-0\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:17 crc kubenswrapper[4751]: I0131 15:05:17.807866 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:18 crc kubenswrapper[4751]: I0131 15:05:18.233848 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 15:05:18 crc kubenswrapper[4751]: I0131 15:05:18.415261 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8309fec8-e5ee-4e23-8617-ab2e7ba833d6" path="/var/lib/kubelet/pods/8309fec8-e5ee-4e23-8617-ab2e7ba833d6/volumes" Jan 31 15:05:18 crc kubenswrapper[4751]: I0131 15:05:18.417774 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"e0e8efba-9adc-482b-bd77-553d76648ac6","Type":"ContainerStarted","Data":"de4d5808fc4ba7308f826962b8650c6ce882dcfa84a2b9961ed782c3d596f76e"} Jan 31 15:05:18 crc kubenswrapper[4751]: I0131 15:05:18.417813 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"e0e8efba-9adc-482b-bd77-553d76648ac6","Type":"ContainerStarted","Data":"8b49f74873882c94e30e439215c7b1269126be109dcab9f528966ad2a1118a0c"} Jan 31 15:05:19 crc kubenswrapper[4751]: I0131 15:05:19.424573 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"e0e8efba-9adc-482b-bd77-553d76648ac6","Type":"ContainerStarted","Data":"90a6f0dd1552854347833452de54355b3cea39a33ffea2db5092440b501dadb7"} Jan 31 15:05:19 crc kubenswrapper[4751]: I0131 15:05:19.452919 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glance-default-internal-api-0" podStartSLOduration=2.45289148 podStartE2EDuration="2.45289148s" 
podCreationTimestamp="2026-01-31 15:05:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:05:19.445580407 +0000 UTC m=+1423.820293292" watchObservedRunningTime="2026-01-31 15:05:19.45289148 +0000 UTC m=+1423.827604405" Jan 31 15:05:24 crc kubenswrapper[4751]: I0131 15:05:24.790964 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:24 crc kubenswrapper[4751]: I0131 15:05:24.793292 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:24 crc kubenswrapper[4751]: I0131 15:05:24.821697 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:24 crc kubenswrapper[4751]: I0131 15:05:24.829622 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:25 crc kubenswrapper[4751]: I0131 15:05:25.467498 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:25 crc kubenswrapper[4751]: I0131 15:05:25.467549 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:27 crc kubenswrapper[4751]: I0131 15:05:27.342475 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:27 crc kubenswrapper[4751]: I0131 15:05:27.349355 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:05:27 crc kubenswrapper[4751]: I0131 15:05:27.809100 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:27 crc kubenswrapper[4751]: I0131 15:05:27.809151 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:27 crc kubenswrapper[4751]: I0131 15:05:27.837827 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:27 crc kubenswrapper[4751]: I0131 15:05:27.848461 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:28 crc kubenswrapper[4751]: I0131 15:05:28.492563 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:28 crc kubenswrapper[4751]: I0131 15:05:28.492614 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:30 crc kubenswrapper[4751]: I0131 15:05:30.326711 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:30 crc kubenswrapper[4751]: I0131 15:05:30.330216 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:05:38 crc kubenswrapper[4751]: I0131 15:05:38.896942 4751 patch_prober.go:28] interesting pod/machine-config-daemon-2wpj7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:05:38 crc kubenswrapper[4751]: I0131 15:05:38.897343 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:05:45 crc kubenswrapper[4751]: I0131 15:05:45.977094 4751 scope.go:117] "RemoveContainer" containerID="53e4421364bd50f8121a14bb4c3e20cbc7c5ba08c19bdb68ee47f37ac2b94308" Jan 31 15:05:46 crc kubenswrapper[4751]: I0131 15:05:46.000810 4751 scope.go:117] "RemoveContainer" containerID="4d83615719bc342a748610c14a746dcc08356aa04afe079d5f96b964e25ed0f6" Jan 31 15:05:46 crc kubenswrapper[4751]: I0131 15:05:46.073429 4751 scope.go:117] "RemoveContainer" containerID="4157453b55a598117cf21c7e58fec8625fe386b3472f188272985dac7429ad14" Jan 31 15:06:05 crc kubenswrapper[4751]: I0131 15:06:05.074237 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/root-account-create-update-4gxnx"] Jan 31 15:06:05 crc kubenswrapper[4751]: I0131 15:06:05.081541 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/root-account-create-update-4gxnx"] Jan 31 15:06:05 crc kubenswrapper[4751]: I0131 15:06:05.703962 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Jan 31 15:06:05 crc kubenswrapper[4751]: I0131 15:06:05.704328 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-1" podUID="2ce40f98-80af-4a4b-8556-c5c7dd84fc58" containerName="glance-httpd" containerID="cri-o://5b307a953bf028387f69f75e58e1545d2c2c38f81096b44abf4639237badb848" gracePeriod=30 Jan 31 15:06:05 crc kubenswrapper[4751]: I0131 15:06:05.704304 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-1" podUID="2ce40f98-80af-4a4b-8556-c5c7dd84fc58" containerName="glance-log" containerID="cri-o://e83e3eee6d1bd6eeb18c2480a68530a6737d3f76fac28ccff0e74d019e406b35" gracePeriod=30 Jan 31 15:06:05 crc 
kubenswrapper[4751]: I0131 15:06:05.847863 4751 generic.go:334] "Generic (PLEG): container finished" podID="2ce40f98-80af-4a4b-8556-c5c7dd84fc58" containerID="e83e3eee6d1bd6eeb18c2480a68530a6737d3f76fac28ccff0e74d019e406b35" exitCode=143 Jan 31 15:06:05 crc kubenswrapper[4751]: I0131 15:06:05.847903 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"2ce40f98-80af-4a4b-8556-c5c7dd84fc58","Type":"ContainerDied","Data":"e83e3eee6d1bd6eeb18c2480a68530a6737d3f76fac28ccff0e74d019e406b35"} Jan 31 15:06:05 crc kubenswrapper[4751]: I0131 15:06:05.867611 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 15:06:05 crc kubenswrapper[4751]: I0131 15:06:05.867862 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="fada73df-4c18-4f18-9fcd-9fe24825a32c" containerName="glance-log" containerID="cri-o://4ac3b73f999bb2d4398e520fc413f3d3e17b891e2a007e2b9a8454c50571906c" gracePeriod=30 Jan 31 15:06:05 crc kubenswrapper[4751]: I0131 15:06:05.868296 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-1" podUID="fada73df-4c18-4f18-9fcd-9fe24825a32c" containerName="glance-httpd" containerID="cri-o://0622c62b16c45cccd093df45438f4801ecb9dee67a405110fa2bd8152369a536" gracePeriod=30 Jan 31 15:06:06 crc kubenswrapper[4751]: I0131 15:06:06.420816 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6b6442aa-bbe5-4658-bfe9-28ee4f9b3daf" path="/var/lib/kubelet/pods/6b6442aa-bbe5-4658-bfe9-28ee4f9b3daf/volumes" Jan 31 15:06:06 crc kubenswrapper[4751]: I0131 15:06:06.858872 4751 generic.go:334] "Generic (PLEG): container finished" podID="fada73df-4c18-4f18-9fcd-9fe24825a32c" containerID="4ac3b73f999bb2d4398e520fc413f3d3e17b891e2a007e2b9a8454c50571906c" exitCode=143 Jan 31 15:06:06 crc 
kubenswrapper[4751]: I0131 15:06:06.858928 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"fada73df-4c18-4f18-9fcd-9fe24825a32c","Type":"ContainerDied","Data":"4ac3b73f999bb2d4398e520fc413f3d3e17b891e2a007e2b9a8454c50571906c"} Jan 31 15:06:07 crc kubenswrapper[4751]: I0131 15:06:07.086949 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-sync-96ldw"] Jan 31 15:06:07 crc kubenswrapper[4751]: I0131 15:06:07.095941 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-sync-96ldw"] Jan 31 15:06:07 crc kubenswrapper[4751]: I0131 15:06:07.154732 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 15:06:07 crc kubenswrapper[4751]: I0131 15:06:07.154999 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="e0e8efba-9adc-482b-bd77-553d76648ac6" containerName="glance-log" containerID="cri-o://de4d5808fc4ba7308f826962b8650c6ce882dcfa84a2b9961ed782c3d596f76e" gracePeriod=30 Jan 31 15:06:07 crc kubenswrapper[4751]: I0131 15:06:07.155682 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-internal-api-0" podUID="e0e8efba-9adc-482b-bd77-553d76648ac6" containerName="glance-httpd" containerID="cri-o://90a6f0dd1552854347833452de54355b3cea39a33ffea2db5092440b501dadb7" gracePeriod=30 Jan 31 15:06:07 crc kubenswrapper[4751]: I0131 15:06:07.166511 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/glancefb75-account-delete-rcplg"] Jan 31 15:06:07 crc kubenswrapper[4751]: I0131 15:06:07.167693 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glancefb75-account-delete-rcplg" Jan 31 15:06:07 crc kubenswrapper[4751]: I0131 15:06:07.173602 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glancefb75-account-delete-rcplg"] Jan 31 15:06:07 crc kubenswrapper[4751]: I0131 15:06:07.185361 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09ca3bf6-027a-4e7b-a142-44ad4308fd3e-operator-scripts\") pod \"glancefb75-account-delete-rcplg\" (UID: \"09ca3bf6-027a-4e7b-a142-44ad4308fd3e\") " pod="glance-kuttl-tests/glancefb75-account-delete-rcplg" Jan 31 15:06:07 crc kubenswrapper[4751]: I0131 15:06:07.185432 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6x4fc\" (UniqueName: \"kubernetes.io/projected/09ca3bf6-027a-4e7b-a142-44ad4308fd3e-kube-api-access-6x4fc\") pod \"glancefb75-account-delete-rcplg\" (UID: \"09ca3bf6-027a-4e7b-a142-44ad4308fd3e\") " pod="glance-kuttl-tests/glancefb75-account-delete-rcplg" Jan 31 15:06:07 crc kubenswrapper[4751]: I0131 15:06:07.225388 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 15:06:07 crc kubenswrapper[4751]: I0131 15:06:07.225672 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="90af064c-9d0a-4818-8e19-c87da44a879b" containerName="glance-log" containerID="cri-o://8b807b8111d0f0f33aee37200b1958ff47b158280257c5ed2833cf9c5c3a286f" gracePeriod=30 Jan 31 15:06:07 crc kubenswrapper[4751]: I0131 15:06:07.226159 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/glance-default-external-api-0" podUID="90af064c-9d0a-4818-8e19-c87da44a879b" containerName="glance-httpd" 
containerID="cri-o://e0cc5d1490293994ec3b55b9f608e90b08473604cf0ebba39024c27ee8d6005b" gracePeriod=30 Jan 31 15:06:07 crc kubenswrapper[4751]: I0131 15:06:07.287111 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09ca3bf6-027a-4e7b-a142-44ad4308fd3e-operator-scripts\") pod \"glancefb75-account-delete-rcplg\" (UID: \"09ca3bf6-027a-4e7b-a142-44ad4308fd3e\") " pod="glance-kuttl-tests/glancefb75-account-delete-rcplg" Jan 31 15:06:07 crc kubenswrapper[4751]: I0131 15:06:07.287206 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6x4fc\" (UniqueName: \"kubernetes.io/projected/09ca3bf6-027a-4e7b-a142-44ad4308fd3e-kube-api-access-6x4fc\") pod \"glancefb75-account-delete-rcplg\" (UID: \"09ca3bf6-027a-4e7b-a142-44ad4308fd3e\") " pod="glance-kuttl-tests/glancefb75-account-delete-rcplg" Jan 31 15:06:07 crc kubenswrapper[4751]: I0131 15:06:07.288456 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09ca3bf6-027a-4e7b-a142-44ad4308fd3e-operator-scripts\") pod \"glancefb75-account-delete-rcplg\" (UID: \"09ca3bf6-027a-4e7b-a142-44ad4308fd3e\") " pod="glance-kuttl-tests/glancefb75-account-delete-rcplg" Jan 31 15:06:07 crc kubenswrapper[4751]: I0131 15:06:07.314943 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6x4fc\" (UniqueName: \"kubernetes.io/projected/09ca3bf6-027a-4e7b-a142-44ad4308fd3e-kube-api-access-6x4fc\") pod \"glancefb75-account-delete-rcplg\" (UID: \"09ca3bf6-027a-4e7b-a142-44ad4308fd3e\") " pod="glance-kuttl-tests/glancefb75-account-delete-rcplg" Jan 31 15:06:07 crc kubenswrapper[4751]: I0131 15:06:07.485229 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glancefb75-account-delete-rcplg" Jan 31 15:06:07 crc kubenswrapper[4751]: I0131 15:06:07.730830 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/glancefb75-account-delete-rcplg"] Jan 31 15:06:07 crc kubenswrapper[4751]: I0131 15:06:07.868279 4751 generic.go:334] "Generic (PLEG): container finished" podID="90af064c-9d0a-4818-8e19-c87da44a879b" containerID="8b807b8111d0f0f33aee37200b1958ff47b158280257c5ed2833cf9c5c3a286f" exitCode=143 Jan 31 15:06:07 crc kubenswrapper[4751]: I0131 15:06:07.868352 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"90af064c-9d0a-4818-8e19-c87da44a879b","Type":"ContainerDied","Data":"8b807b8111d0f0f33aee37200b1958ff47b158280257c5ed2833cf9c5c3a286f"} Jan 31 15:06:07 crc kubenswrapper[4751]: I0131 15:06:07.871939 4751 generic.go:334] "Generic (PLEG): container finished" podID="e0e8efba-9adc-482b-bd77-553d76648ac6" containerID="de4d5808fc4ba7308f826962b8650c6ce882dcfa84a2b9961ed782c3d596f76e" exitCode=143 Jan 31 15:06:07 crc kubenswrapper[4751]: I0131 15:06:07.871998 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"e0e8efba-9adc-482b-bd77-553d76648ac6","Type":"ContainerDied","Data":"de4d5808fc4ba7308f826962b8650c6ce882dcfa84a2b9961ed782c3d596f76e"} Jan 31 15:06:07 crc kubenswrapper[4751]: I0131 15:06:07.873303 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glancefb75-account-delete-rcplg" event={"ID":"09ca3bf6-027a-4e7b-a142-44ad4308fd3e","Type":"ContainerStarted","Data":"f49a7bb7647ca08427d7a0e06c9d600f223665df0c37dd55d0d845016d321065"} Jan 31 15:06:07 crc kubenswrapper[4751]: I0131 15:06:07.889121 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/glancefb75-account-delete-rcplg" podStartSLOduration=0.889106287 
podStartE2EDuration="889.106287ms" podCreationTimestamp="2026-01-31 15:06:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:06:07.884921346 +0000 UTC m=+1472.259634231" watchObservedRunningTime="2026-01-31 15:06:07.889106287 +0000 UTC m=+1472.263819162" Jan 31 15:06:08 crc kubenswrapper[4751]: I0131 15:06:08.413250 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="837cbdc7-6443-4789-a796-c2f1bd79119d" path="/var/lib/kubelet/pods/837cbdc7-6443-4789-a796-c2f1bd79119d/volumes" Jan 31 15:06:08 crc kubenswrapper[4751]: I0131 15:06:08.884381 4751 generic.go:334] "Generic (PLEG): container finished" podID="09ca3bf6-027a-4e7b-a142-44ad4308fd3e" containerID="d083c8910ad51529e38c062770d9b1a45be20502e7c76553d0090ac7a9898be5" exitCode=0 Jan 31 15:06:08 crc kubenswrapper[4751]: I0131 15:06:08.884446 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glancefb75-account-delete-rcplg" event={"ID":"09ca3bf6-027a-4e7b-a142-44ad4308fd3e","Type":"ContainerDied","Data":"d083c8910ad51529e38c062770d9b1a45be20502e7c76553d0090ac7a9898be5"} Jan 31 15:06:08 crc kubenswrapper[4751]: I0131 15:06:08.896370 4751 patch_prober.go:28] interesting pod/machine-config-daemon-2wpj7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:06:08 crc kubenswrapper[4751]: I0131 15:06:08.896460 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:06:08 crc kubenswrapper[4751]: I0131 15:06:08.896517 
4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" Jan 31 15:06:08 crc kubenswrapper[4751]: I0131 15:06:08.897807 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"89a88ddaeae8a6fe7859be79e45bc66e157a0d02a03f5daf69e0ab6320bd15be"} pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 15:06:08 crc kubenswrapper[4751]: I0131 15:06:08.897905 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerName="machine-config-daemon" containerID="cri-o://89a88ddaeae8a6fe7859be79e45bc66e157a0d02a03f5daf69e0ab6320bd15be" gracePeriod=600 Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.218807 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.415670 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") pod \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.415746 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-etc-nvme\") pod \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.415796 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-sys\") pod \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.415811 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-run\") pod \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.415900 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-sys" (OuterVolumeSpecName: "sys") pod "2ce40f98-80af-4a4b-8556-c5c7dd84fc58" (UID: "2ce40f98-80af-4a4b-8556-c5c7dd84fc58"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.415916 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "2ce40f98-80af-4a4b-8556-c5c7dd84fc58" (UID: "2ce40f98-80af-4a4b-8556-c5c7dd84fc58"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.415961 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-logs\") pod \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.415982 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-run" (OuterVolumeSpecName: "run") pod "2ce40f98-80af-4a4b-8556-c5c7dd84fc58" (UID: "2ce40f98-80af-4a4b-8556-c5c7dd84fc58"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.416003 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjmr8\" (UniqueName: \"kubernetes.io/projected/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-kube-api-access-rjmr8\") pod \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.416024 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-config-data\") pod \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.416047 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-dev\") pod \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.416064 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-lib-modules\") pod \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.416130 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-etc-iscsi\") pod \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.416179 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: 
\"kubernetes.io/secret/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-scripts\") pod \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.416209 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") pod \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.416224 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-httpd-run\") pod \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.416248 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-var-locks-brick\") pod \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\" (UID: \"2ce40f98-80af-4a4b-8556-c5c7dd84fc58\") " Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.416296 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-logs" (OuterVolumeSpecName: "logs") pod "2ce40f98-80af-4a4b-8556-c5c7dd84fc58" (UID: "2ce40f98-80af-4a4b-8556-c5c7dd84fc58"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.416503 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "2ce40f98-80af-4a4b-8556-c5c7dd84fc58" (UID: "2ce40f98-80af-4a4b-8556-c5c7dd84fc58"). InnerVolumeSpecName "etc-iscsi". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.416530 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-dev" (OuterVolumeSpecName: "dev") pod "2ce40f98-80af-4a4b-8556-c5c7dd84fc58" (UID: "2ce40f98-80af-4a4b-8556-c5c7dd84fc58"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.416546 4751 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-sys\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.416558 4751 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.416559 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "2ce40f98-80af-4a4b-8556-c5c7dd84fc58" (UID: "2ce40f98-80af-4a4b-8556-c5c7dd84fc58"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.416566 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.416576 4751 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.416793 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "2ce40f98-80af-4a4b-8556-c5c7dd84fc58" (UID: "2ce40f98-80af-4a4b-8556-c5c7dd84fc58"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.417156 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "2ce40f98-80af-4a4b-8556-c5c7dd84fc58" (UID: "2ce40f98-80af-4a4b-8556-c5c7dd84fc58"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.421573 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-scripts" (OuterVolumeSpecName: "scripts") pod "2ce40f98-80af-4a4b-8556-c5c7dd84fc58" (UID: "2ce40f98-80af-4a4b-8556-c5c7dd84fc58"). InnerVolumeSpecName "scripts". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.422140 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage13-crc" (OuterVolumeSpecName: "glance-cache") pod "2ce40f98-80af-4a4b-8556-c5c7dd84fc58" (UID: "2ce40f98-80af-4a4b-8556-c5c7dd84fc58"). InnerVolumeSpecName "local-storage13-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.422166 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-kube-api-access-rjmr8" (OuterVolumeSpecName: "kube-api-access-rjmr8") pod "2ce40f98-80af-4a4b-8556-c5c7dd84fc58" (UID: "2ce40f98-80af-4a4b-8556-c5c7dd84fc58"). InnerVolumeSpecName "kube-api-access-rjmr8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.424144 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage17-crc" (OuterVolumeSpecName: "glance") pod "2ce40f98-80af-4a4b-8556-c5c7dd84fc58" (UID: "2ce40f98-80af-4a4b-8556-c5c7dd84fc58"). InnerVolumeSpecName "local-storage17-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.468209 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-config-data" (OuterVolumeSpecName: "config-data") pod "2ce40f98-80af-4a4b-8556-c5c7dd84fc58" (UID: "2ce40f98-80af-4a4b-8556-c5c7dd84fc58"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.488756 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.519294 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjmr8\" (UniqueName: \"kubernetes.io/projected/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-kube-api-access-rjmr8\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.519334 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.519347 4751 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-dev\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.519359 4751 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.519377 4751 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.519460 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.519512 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.519548 4751 reconciler_common.go:286] 
"operationExecutor.UnmountDevice started for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") on node \"crc\" " Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.519588 4751 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/2ce40f98-80af-4a4b-8556-c5c7dd84fc58-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.519614 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" " Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.545892 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage17-crc" (UniqueName: "kubernetes.io/local-volume/local-storage17-crc") on node "crc" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.554837 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage13-crc" (UniqueName: "kubernetes.io/local-volume/local-storage13-crc") on node "crc" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.620844 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-var-locks-brick\") pod \"fada73df-4c18-4f18-9fcd-9fe24825a32c\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.620957 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "fada73df-4c18-4f18-9fcd-9fe24825a32c" (UID: "fada73df-4c18-4f18-9fcd-9fe24825a32c"). InnerVolumeSpecName "var-locks-brick". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.620962 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") pod \"fada73df-4c18-4f18-9fcd-9fe24825a32c\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.621026 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-run\") pod \"fada73df-4c18-4f18-9fcd-9fe24825a32c\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.621049 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-dev\") pod \"fada73df-4c18-4f18-9fcd-9fe24825a32c\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.621084 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fada73df-4c18-4f18-9fcd-9fe24825a32c-logs\") pod \"fada73df-4c18-4f18-9fcd-9fe24825a32c\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.621113 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pmpk8\" (UniqueName: \"kubernetes.io/projected/fada73df-4c18-4f18-9fcd-9fe24825a32c-kube-api-access-pmpk8\") pod \"fada73df-4c18-4f18-9fcd-9fe24825a32c\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.621133 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fada73df-4c18-4f18-9fcd-9fe24825a32c-config-data\") pod 
\"fada73df-4c18-4f18-9fcd-9fe24825a32c\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.621156 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") pod \"fada73df-4c18-4f18-9fcd-9fe24825a32c\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.621170 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-sys\") pod \"fada73df-4c18-4f18-9fcd-9fe24825a32c\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.621159 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-run" (OuterVolumeSpecName: "run") pod "fada73df-4c18-4f18-9fcd-9fe24825a32c" (UID: "fada73df-4c18-4f18-9fcd-9fe24825a32c"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.621202 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fada73df-4c18-4f18-9fcd-9fe24825a32c-httpd-run\") pod \"fada73df-4c18-4f18-9fcd-9fe24825a32c\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.621190 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-dev" (OuterVolumeSpecName: "dev") pod "fada73df-4c18-4f18-9fcd-9fe24825a32c" (UID: "fada73df-4c18-4f18-9fcd-9fe24825a32c"). InnerVolumeSpecName "dev". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.621242 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-lib-modules\") pod \"fada73df-4c18-4f18-9fcd-9fe24825a32c\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.621258 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-etc-nvme\") pod \"fada73df-4c18-4f18-9fcd-9fe24825a32c\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.621268 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-sys" (OuterVolumeSpecName: "sys") pod "fada73df-4c18-4f18-9fcd-9fe24825a32c" (UID: "fada73df-4c18-4f18-9fcd-9fe24825a32c"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.621285 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-etc-iscsi\") pod \"fada73df-4c18-4f18-9fcd-9fe24825a32c\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.621315 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fada73df-4c18-4f18-9fcd-9fe24825a32c-scripts\") pod \"fada73df-4c18-4f18-9fcd-9fe24825a32c\" (UID: \"fada73df-4c18-4f18-9fcd-9fe24825a32c\") " Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.621588 4751 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.621611 4751 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-dev\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.621619 4751 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-sys\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.621627 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage17-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage17-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.621634 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage13-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage13-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:09 crc 
kubenswrapper[4751]: I0131 15:06:09.621647 4751 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.621698 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fada73df-4c18-4f18-9fcd-9fe24825a32c-logs" (OuterVolumeSpecName: "logs") pod "fada73df-4c18-4f18-9fcd-9fe24825a32c" (UID: "fada73df-4c18-4f18-9fcd-9fe24825a32c"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.621756 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "fada73df-4c18-4f18-9fcd-9fe24825a32c" (UID: "fada73df-4c18-4f18-9fcd-9fe24825a32c"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.622001 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fada73df-4c18-4f18-9fcd-9fe24825a32c-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "fada73df-4c18-4f18-9fcd-9fe24825a32c" (UID: "fada73df-4c18-4f18-9fcd-9fe24825a32c"). InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.622000 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "fada73df-4c18-4f18-9fcd-9fe24825a32c" (UID: "fada73df-4c18-4f18-9fcd-9fe24825a32c"). InnerVolumeSpecName "etc-nvme". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.622035 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "fada73df-4c18-4f18-9fcd-9fe24825a32c" (UID: "fada73df-4c18-4f18-9fcd-9fe24825a32c"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.623867 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage16-crc" (OuterVolumeSpecName: "glance") pod "fada73df-4c18-4f18-9fcd-9fe24825a32c" (UID: "fada73df-4c18-4f18-9fcd-9fe24825a32c"). InnerVolumeSpecName "local-storage16-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.624489 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fada73df-4c18-4f18-9fcd-9fe24825a32c-scripts" (OuterVolumeSpecName: "scripts") pod "fada73df-4c18-4f18-9fcd-9fe24825a32c" (UID: "fada73df-4c18-4f18-9fcd-9fe24825a32c"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.624911 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fada73df-4c18-4f18-9fcd-9fe24825a32c-kube-api-access-pmpk8" (OuterVolumeSpecName: "kube-api-access-pmpk8") pod "fada73df-4c18-4f18-9fcd-9fe24825a32c" (UID: "fada73df-4c18-4f18-9fcd-9fe24825a32c"). InnerVolumeSpecName "kube-api-access-pmpk8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.624993 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage18-crc" (OuterVolumeSpecName: "glance-cache") pod "fada73df-4c18-4f18-9fcd-9fe24825a32c" (UID: "fada73df-4c18-4f18-9fcd-9fe24825a32c"). InnerVolumeSpecName "local-storage18-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.653023 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fada73df-4c18-4f18-9fcd-9fe24825a32c-config-data" (OuterVolumeSpecName: "config-data") pod "fada73df-4c18-4f18-9fcd-9fe24825a32c" (UID: "fada73df-4c18-4f18-9fcd-9fe24825a32c"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.722684 4751 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.722724 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/fada73df-4c18-4f18-9fcd-9fe24825a32c-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.722754 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") on node \"crc\" " Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.722764 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/fada73df-4c18-4f18-9fcd-9fe24825a32c-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.722775 4751 
reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pmpk8\" (UniqueName: \"kubernetes.io/projected/fada73df-4c18-4f18-9fcd-9fe24825a32c-kube-api-access-pmpk8\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.722786 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/fada73df-4c18-4f18-9fcd-9fe24825a32c-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.722799 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") on node \"crc\" " Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.722808 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/fada73df-4c18-4f18-9fcd-9fe24825a32c-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.722817 4751 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.722825 4751 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/fada73df-4c18-4f18-9fcd-9fe24825a32c-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.734915 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage18-crc" (UniqueName: "kubernetes.io/local-volume/local-storage18-crc") on node "crc" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.734998 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage16-crc" (UniqueName: "kubernetes.io/local-volume/local-storage16-crc") on node "crc" Jan 31 15:06:09 crc 
kubenswrapper[4751]: I0131 15:06:09.824043 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage16-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage16-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.824097 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage18-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage18-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.896360 4751 generic.go:334] "Generic (PLEG): container finished" podID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerID="89a88ddaeae8a6fe7859be79e45bc66e157a0d02a03f5daf69e0ab6320bd15be" exitCode=0 Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.896442 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" event={"ID":"b4c170e8-22c9-43a9-8b34-9d626c2ccddc","Type":"ContainerDied","Data":"89a88ddaeae8a6fe7859be79e45bc66e157a0d02a03f5daf69e0ab6320bd15be"} Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.897322 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" event={"ID":"b4c170e8-22c9-43a9-8b34-9d626c2ccddc","Type":"ContainerStarted","Data":"1e91cdb1164832b2457470e9c5a6b63801cdbe4c69db7c3369929241eb60a130"} Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.897571 4751 scope.go:117] "RemoveContainer" containerID="7fcf941f127d31d0e5c99d5ef038c633782d289d0e911f4e9c5c6f77b2a91e2a" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.899773 4751 generic.go:334] "Generic (PLEG): container finished" podID="fada73df-4c18-4f18-9fcd-9fe24825a32c" containerID="0622c62b16c45cccd093df45438f4801ecb9dee67a405110fa2bd8152369a536" exitCode=0 Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.899862 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-1" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.899885 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"fada73df-4c18-4f18-9fcd-9fe24825a32c","Type":"ContainerDied","Data":"0622c62b16c45cccd093df45438f4801ecb9dee67a405110fa2bd8152369a536"} Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.899935 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-1" event={"ID":"fada73df-4c18-4f18-9fcd-9fe24825a32c","Type":"ContainerDied","Data":"4e85dc9929fef5538bbedbabd1bc4862934d09f3a214bd5393c16cf9dbfd21f7"} Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.902466 4751 generic.go:334] "Generic (PLEG): container finished" podID="2ce40f98-80af-4a4b-8556-c5c7dd84fc58" containerID="5b307a953bf028387f69f75e58e1545d2c2c38f81096b44abf4639237badb848" exitCode=0 Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.902543 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-1" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.902588 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"2ce40f98-80af-4a4b-8556-c5c7dd84fc58","Type":"ContainerDied","Data":"5b307a953bf028387f69f75e58e1545d2c2c38f81096b44abf4639237badb848"} Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.902621 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-1" event={"ID":"2ce40f98-80af-4a4b-8556-c5c7dd84fc58","Type":"ContainerDied","Data":"c11c5ca70f63ee145b6adb16291fa14f1d7247eb5f019288ced5f1338933a04e"} Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.933470 4751 scope.go:117] "RemoveContainer" containerID="0622c62b16c45cccd093df45438f4801ecb9dee67a405110fa2bd8152369a536" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.944958 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.953035 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-1"] Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.960055 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.965846 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-1"] Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.971861 4751 scope.go:117] "RemoveContainer" containerID="4ac3b73f999bb2d4398e520fc413f3d3e17b891e2a007e2b9a8454c50571906c" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.990847 4751 scope.go:117] "RemoveContainer" containerID="0622c62b16c45cccd093df45438f4801ecb9dee67a405110fa2bd8152369a536" Jan 31 15:06:09 crc kubenswrapper[4751]: 
E0131 15:06:09.991333 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0622c62b16c45cccd093df45438f4801ecb9dee67a405110fa2bd8152369a536\": container with ID starting with 0622c62b16c45cccd093df45438f4801ecb9dee67a405110fa2bd8152369a536 not found: ID does not exist" containerID="0622c62b16c45cccd093df45438f4801ecb9dee67a405110fa2bd8152369a536" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.991369 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0622c62b16c45cccd093df45438f4801ecb9dee67a405110fa2bd8152369a536"} err="failed to get container status \"0622c62b16c45cccd093df45438f4801ecb9dee67a405110fa2bd8152369a536\": rpc error: code = NotFound desc = could not find container \"0622c62b16c45cccd093df45438f4801ecb9dee67a405110fa2bd8152369a536\": container with ID starting with 0622c62b16c45cccd093df45438f4801ecb9dee67a405110fa2bd8152369a536 not found: ID does not exist" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.991397 4751 scope.go:117] "RemoveContainer" containerID="4ac3b73f999bb2d4398e520fc413f3d3e17b891e2a007e2b9a8454c50571906c" Jan 31 15:06:09 crc kubenswrapper[4751]: E0131 15:06:09.991776 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ac3b73f999bb2d4398e520fc413f3d3e17b891e2a007e2b9a8454c50571906c\": container with ID starting with 4ac3b73f999bb2d4398e520fc413f3d3e17b891e2a007e2b9a8454c50571906c not found: ID does not exist" containerID="4ac3b73f999bb2d4398e520fc413f3d3e17b891e2a007e2b9a8454c50571906c" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.991891 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ac3b73f999bb2d4398e520fc413f3d3e17b891e2a007e2b9a8454c50571906c"} err="failed to get container status \"4ac3b73f999bb2d4398e520fc413f3d3e17b891e2a007e2b9a8454c50571906c\": 
rpc error: code = NotFound desc = could not find container \"4ac3b73f999bb2d4398e520fc413f3d3e17b891e2a007e2b9a8454c50571906c\": container with ID starting with 4ac3b73f999bb2d4398e520fc413f3d3e17b891e2a007e2b9a8454c50571906c not found: ID does not exist" Jan 31 15:06:09 crc kubenswrapper[4751]: I0131 15:06:09.991993 4751 scope.go:117] "RemoveContainer" containerID="5b307a953bf028387f69f75e58e1545d2c2c38f81096b44abf4639237badb848" Jan 31 15:06:10 crc kubenswrapper[4751]: I0131 15:06:10.015671 4751 scope.go:117] "RemoveContainer" containerID="e83e3eee6d1bd6eeb18c2480a68530a6737d3f76fac28ccff0e74d019e406b35" Jan 31 15:06:10 crc kubenswrapper[4751]: I0131 15:06:10.040217 4751 scope.go:117] "RemoveContainer" containerID="5b307a953bf028387f69f75e58e1545d2c2c38f81096b44abf4639237badb848" Jan 31 15:06:10 crc kubenswrapper[4751]: E0131 15:06:10.040766 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5b307a953bf028387f69f75e58e1545d2c2c38f81096b44abf4639237badb848\": container with ID starting with 5b307a953bf028387f69f75e58e1545d2c2c38f81096b44abf4639237badb848 not found: ID does not exist" containerID="5b307a953bf028387f69f75e58e1545d2c2c38f81096b44abf4639237badb848" Jan 31 15:06:10 crc kubenswrapper[4751]: I0131 15:06:10.040799 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5b307a953bf028387f69f75e58e1545d2c2c38f81096b44abf4639237badb848"} err="failed to get container status \"5b307a953bf028387f69f75e58e1545d2c2c38f81096b44abf4639237badb848\": rpc error: code = NotFound desc = could not find container \"5b307a953bf028387f69f75e58e1545d2c2c38f81096b44abf4639237badb848\": container with ID starting with 5b307a953bf028387f69f75e58e1545d2c2c38f81096b44abf4639237badb848 not found: ID does not exist" Jan 31 15:06:10 crc kubenswrapper[4751]: I0131 15:06:10.040827 4751 scope.go:117] "RemoveContainer" 
containerID="e83e3eee6d1bd6eeb18c2480a68530a6737d3f76fac28ccff0e74d019e406b35" Jan 31 15:06:10 crc kubenswrapper[4751]: E0131 15:06:10.041237 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e83e3eee6d1bd6eeb18c2480a68530a6737d3f76fac28ccff0e74d019e406b35\": container with ID starting with e83e3eee6d1bd6eeb18c2480a68530a6737d3f76fac28ccff0e74d019e406b35 not found: ID does not exist" containerID="e83e3eee6d1bd6eeb18c2480a68530a6737d3f76fac28ccff0e74d019e406b35" Jan 31 15:06:10 crc kubenswrapper[4751]: I0131 15:06:10.041260 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e83e3eee6d1bd6eeb18c2480a68530a6737d3f76fac28ccff0e74d019e406b35"} err="failed to get container status \"e83e3eee6d1bd6eeb18c2480a68530a6737d3f76fac28ccff0e74d019e406b35\": rpc error: code = NotFound desc = could not find container \"e83e3eee6d1bd6eeb18c2480a68530a6737d3f76fac28ccff0e74d019e406b35\": container with ID starting with e83e3eee6d1bd6eeb18c2480a68530a6737d3f76fac28ccff0e74d019e406b35 not found: ID does not exist" Jan 31 15:06:10 crc kubenswrapper[4751]: I0131 15:06:10.123449 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glancefb75-account-delete-rcplg" Jan 31 15:06:10 crc kubenswrapper[4751]: I0131 15:06:10.229586 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09ca3bf6-027a-4e7b-a142-44ad4308fd3e-operator-scripts\") pod \"09ca3bf6-027a-4e7b-a142-44ad4308fd3e\" (UID: \"09ca3bf6-027a-4e7b-a142-44ad4308fd3e\") " Jan 31 15:06:10 crc kubenswrapper[4751]: I0131 15:06:10.229712 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6x4fc\" (UniqueName: \"kubernetes.io/projected/09ca3bf6-027a-4e7b-a142-44ad4308fd3e-kube-api-access-6x4fc\") pod \"09ca3bf6-027a-4e7b-a142-44ad4308fd3e\" (UID: \"09ca3bf6-027a-4e7b-a142-44ad4308fd3e\") " Jan 31 15:06:10 crc kubenswrapper[4751]: I0131 15:06:10.230991 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ca3bf6-027a-4e7b-a142-44ad4308fd3e-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "09ca3bf6-027a-4e7b-a142-44ad4308fd3e" (UID: "09ca3bf6-027a-4e7b-a142-44ad4308fd3e"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:06:10 crc kubenswrapper[4751]: I0131 15:06:10.234313 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ca3bf6-027a-4e7b-a142-44ad4308fd3e-kube-api-access-6x4fc" (OuterVolumeSpecName: "kube-api-access-6x4fc") pod "09ca3bf6-027a-4e7b-a142-44ad4308fd3e" (UID: "09ca3bf6-027a-4e7b-a142-44ad4308fd3e"). InnerVolumeSpecName "kube-api-access-6x4fc". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:06:10 crc kubenswrapper[4751]: I0131 15:06:10.331282 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/09ca3bf6-027a-4e7b-a142-44ad4308fd3e-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:10 crc kubenswrapper[4751]: I0131 15:06:10.331321 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6x4fc\" (UniqueName: \"kubernetes.io/projected/09ca3bf6-027a-4e7b-a142-44ad4308fd3e-kube-api-access-6x4fc\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:10 crc kubenswrapper[4751]: I0131 15:06:10.418910 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2ce40f98-80af-4a4b-8556-c5c7dd84fc58" path="/var/lib/kubelet/pods/2ce40f98-80af-4a4b-8556-c5c7dd84fc58/volumes" Jan 31 15:06:10 crc kubenswrapper[4751]: I0131 15:06:10.419943 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fada73df-4c18-4f18-9fcd-9fe24825a32c" path="/var/lib/kubelet/pods/fada73df-4c18-4f18-9fcd-9fe24825a32c/volumes" Jan 31 15:06:10 crc kubenswrapper[4751]: I0131 15:06:10.858897 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:06:10 crc kubenswrapper[4751]: I0131 15:06:10.917482 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glancefb75-account-delete-rcplg" event={"ID":"09ca3bf6-027a-4e7b-a142-44ad4308fd3e","Type":"ContainerDied","Data":"f49a7bb7647ca08427d7a0e06c9d600f223665df0c37dd55d0d845016d321065"} Jan 31 15:06:10 crc kubenswrapper[4751]: I0131 15:06:10.917519 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f49a7bb7647ca08427d7a0e06c9d600f223665df0c37dd55d0d845016d321065" Jan 31 15:06:10 crc kubenswrapper[4751]: I0131 15:06:10.917646 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glancefb75-account-delete-rcplg" Jan 31 15:06:10 crc kubenswrapper[4751]: I0131 15:06:10.924083 4751 generic.go:334] "Generic (PLEG): container finished" podID="90af064c-9d0a-4818-8e19-c87da44a879b" containerID="e0cc5d1490293994ec3b55b9f608e90b08473604cf0ebba39024c27ee8d6005b" exitCode=0 Jan 31 15:06:10 crc kubenswrapper[4751]: I0131 15:06:10.924136 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"90af064c-9d0a-4818-8e19-c87da44a879b","Type":"ContainerDied","Data":"e0cc5d1490293994ec3b55b9f608e90b08473604cf0ebba39024c27ee8d6005b"} Jan 31 15:06:10 crc kubenswrapper[4751]: I0131 15:06:10.924159 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-external-api-0" event={"ID":"90af064c-9d0a-4818-8e19-c87da44a879b","Type":"ContainerDied","Data":"e99fc8a548b6e6e8e6da564fb55696f96c325bf5ca3500bbda2b1e9e31f7bf04"} Jan 31 15:06:10 crc kubenswrapper[4751]: I0131 15:06:10.924176 4751 scope.go:117] "RemoveContainer" containerID="e0cc5d1490293994ec3b55b9f608e90b08473604cf0ebba39024c27ee8d6005b" Jan 31 15:06:10 crc kubenswrapper[4751]: I0131 15:06:10.924260 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-external-api-0" Jan 31 15:06:10 crc kubenswrapper[4751]: I0131 15:06:10.929454 4751 generic.go:334] "Generic (PLEG): container finished" podID="e0e8efba-9adc-482b-bd77-553d76648ac6" containerID="90a6f0dd1552854347833452de54355b3cea39a33ffea2db5092440b501dadb7" exitCode=0 Jan 31 15:06:10 crc kubenswrapper[4751]: I0131 15:06:10.929529 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"e0e8efba-9adc-482b-bd77-553d76648ac6","Type":"ContainerDied","Data":"90a6f0dd1552854347833452de54355b3cea39a33ffea2db5092440b501dadb7"} Jan 31 15:06:10 crc kubenswrapper[4751]: I0131 15:06:10.950694 4751 scope.go:117] "RemoveContainer" containerID="8b807b8111d0f0f33aee37200b1958ff47b158280257c5ed2833cf9c5c3a286f" Jan 31 15:06:10 crc kubenswrapper[4751]: I0131 15:06:10.969957 4751 scope.go:117] "RemoveContainer" containerID="e0cc5d1490293994ec3b55b9f608e90b08473604cf0ebba39024c27ee8d6005b" Jan 31 15:06:10 crc kubenswrapper[4751]: E0131 15:06:10.970575 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e0cc5d1490293994ec3b55b9f608e90b08473604cf0ebba39024c27ee8d6005b\": container with ID starting with e0cc5d1490293994ec3b55b9f608e90b08473604cf0ebba39024c27ee8d6005b not found: ID does not exist" containerID="e0cc5d1490293994ec3b55b9f608e90b08473604cf0ebba39024c27ee8d6005b" Jan 31 15:06:10 crc kubenswrapper[4751]: I0131 15:06:10.970605 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e0cc5d1490293994ec3b55b9f608e90b08473604cf0ebba39024c27ee8d6005b"} err="failed to get container status \"e0cc5d1490293994ec3b55b9f608e90b08473604cf0ebba39024c27ee8d6005b\": rpc error: code = NotFound desc = could not find container \"e0cc5d1490293994ec3b55b9f608e90b08473604cf0ebba39024c27ee8d6005b\": container with ID starting with 
e0cc5d1490293994ec3b55b9f608e90b08473604cf0ebba39024c27ee8d6005b not found: ID does not exist" Jan 31 15:06:10 crc kubenswrapper[4751]: I0131 15:06:10.970630 4751 scope.go:117] "RemoveContainer" containerID="8b807b8111d0f0f33aee37200b1958ff47b158280257c5ed2833cf9c5c3a286f" Jan 31 15:06:10 crc kubenswrapper[4751]: E0131 15:06:10.971034 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8b807b8111d0f0f33aee37200b1958ff47b158280257c5ed2833cf9c5c3a286f\": container with ID starting with 8b807b8111d0f0f33aee37200b1958ff47b158280257c5ed2833cf9c5c3a286f not found: ID does not exist" containerID="8b807b8111d0f0f33aee37200b1958ff47b158280257c5ed2833cf9c5c3a286f" Jan 31 15:06:10 crc kubenswrapper[4751]: I0131 15:06:10.971054 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8b807b8111d0f0f33aee37200b1958ff47b158280257c5ed2833cf9c5c3a286f"} err="failed to get container status \"8b807b8111d0f0f33aee37200b1958ff47b158280257c5ed2833cf9c5c3a286f\": rpc error: code = NotFound desc = could not find container \"8b807b8111d0f0f33aee37200b1958ff47b158280257c5ed2833cf9c5c3a286f\": container with ID starting with 8b807b8111d0f0f33aee37200b1958ff47b158280257c5ed2833cf9c5c3a286f not found: ID does not exist" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.043616 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") pod \"90af064c-9d0a-4818-8e19-c87da44a879b\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.044485 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-etc-nvme\") pod \"90af064c-9d0a-4818-8e19-c87da44a879b\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " Jan 31 
15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.044856 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") pod \"90af064c-9d0a-4818-8e19-c87da44a879b\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.044933 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "90af064c-9d0a-4818-8e19-c87da44a879b" (UID: "90af064c-9d0a-4818-8e19-c87da44a879b"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.045039 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-sys\") pod \"90af064c-9d0a-4818-8e19-c87da44a879b\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.045232 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-var-locks-brick\") pod \"90af064c-9d0a-4818-8e19-c87da44a879b\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.045301 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-sys" (OuterVolumeSpecName: "sys") pod "90af064c-9d0a-4818-8e19-c87da44a879b" (UID: "90af064c-9d0a-4818-8e19-c87da44a879b"). InnerVolumeSpecName "sys". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.045425 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/90af064c-9d0a-4818-8e19-c87da44a879b-httpd-run\") pod \"90af064c-9d0a-4818-8e19-c87da44a879b\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.045620 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90af064c-9d0a-4818-8e19-c87da44a879b-scripts\") pod \"90af064c-9d0a-4818-8e19-c87da44a879b\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.045838 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-dev\") pod \"90af064c-9d0a-4818-8e19-c87da44a879b\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.045937 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwj7s\" (UniqueName: \"kubernetes.io/projected/90af064c-9d0a-4818-8e19-c87da44a879b-kube-api-access-rwj7s\") pod \"90af064c-9d0a-4818-8e19-c87da44a879b\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.046036 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-run\") pod \"90af064c-9d0a-4818-8e19-c87da44a879b\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.046543 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: 
\"kubernetes.io/empty-dir/90af064c-9d0a-4818-8e19-c87da44a879b-logs\") pod \"90af064c-9d0a-4818-8e19-c87da44a879b\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.046678 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-lib-modules\") pod \"90af064c-9d0a-4818-8e19-c87da44a879b\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.046909 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90af064c-9d0a-4818-8e19-c87da44a879b-config-data\") pod \"90af064c-9d0a-4818-8e19-c87da44a879b\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.047022 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-etc-iscsi\") pod \"90af064c-9d0a-4818-8e19-c87da44a879b\" (UID: \"90af064c-9d0a-4818-8e19-c87da44a879b\") " Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.045433 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "90af064c-9d0a-4818-8e19-c87da44a879b" (UID: "90af064c-9d0a-4818-8e19-c87da44a879b"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.045696 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90af064c-9d0a-4818-8e19-c87da44a879b-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "90af064c-9d0a-4818-8e19-c87da44a879b" (UID: "90af064c-9d0a-4818-8e19-c87da44a879b"). 
InnerVolumeSpecName "httpd-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.046167 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-dev" (OuterVolumeSpecName: "dev") pod "90af064c-9d0a-4818-8e19-c87da44a879b" (UID: "90af064c-9d0a-4818-8e19-c87da44a879b"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.046463 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-run" (OuterVolumeSpecName: "run") pod "90af064c-9d0a-4818-8e19-c87da44a879b" (UID: "90af064c-9d0a-4818-8e19-c87da44a879b"). InnerVolumeSpecName "run". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.046828 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/90af064c-9d0a-4818-8e19-c87da44a879b-logs" (OuterVolumeSpecName: "logs") pod "90af064c-9d0a-4818-8e19-c87da44a879b" (UID: "90af064c-9d0a-4818-8e19-c87da44a879b"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.046860 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "90af064c-9d0a-4818-8e19-c87da44a879b" (UID: "90af064c-9d0a-4818-8e19-c87da44a879b"). InnerVolumeSpecName "lib-modules". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.047331 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "90af064c-9d0a-4818-8e19-c87da44a879b" (UID: "90af064c-9d0a-4818-8e19-c87da44a879b"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.049869 4751 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-dev\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.049989 4751 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.050100 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/90af064c-9d0a-4818-8e19-c87da44a879b-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.050187 4751 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.050270 4751 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.050350 4751 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-etc-nvme\") on node \"crc\" DevicePath \"\"" 
Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.050426 4751 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-sys\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.050902 4751 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/90af064c-9d0a-4818-8e19-c87da44a879b-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.050979 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/90af064c-9d0a-4818-8e19-c87da44a879b-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.105996 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/90af064c-9d0a-4818-8e19-c87da44a879b-kube-api-access-rwj7s" (OuterVolumeSpecName: "kube-api-access-rwj7s") pod "90af064c-9d0a-4818-8e19-c87da44a879b" (UID: "90af064c-9d0a-4818-8e19-c87da44a879b"). InnerVolumeSpecName "kube-api-access-rwj7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.106558 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90af064c-9d0a-4818-8e19-c87da44a879b-scripts" (OuterVolumeSpecName: "scripts") pod "90af064c-9d0a-4818-8e19-c87da44a879b" (UID: "90af064c-9d0a-4818-8e19-c87da44a879b"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.106614 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage02-crc" (OuterVolumeSpecName: "glance-cache") pod "90af064c-9d0a-4818-8e19-c87da44a879b" (UID: "90af064c-9d0a-4818-8e19-c87da44a879b"). 
InnerVolumeSpecName "local-storage02-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.107188 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage05-crc" (OuterVolumeSpecName: "glance") pod "90af064c-9d0a-4818-8e19-c87da44a879b" (UID: "90af064c-9d0a-4818-8e19-c87da44a879b"). InnerVolumeSpecName "local-storage05-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.143445 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.144217 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/90af064c-9d0a-4818-8e19-c87da44a879b-config-data" (OuterVolumeSpecName: "config-data") pod "90af064c-9d0a-4818-8e19-c87da44a879b" (UID: "90af064c-9d0a-4818-8e19-c87da44a879b"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.152206 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" " Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.152444 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" " Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.152510 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/90af064c-9d0a-4818-8e19-c87da44a879b-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.152605 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwj7s\" (UniqueName: \"kubernetes.io/projected/90af064c-9d0a-4818-8e19-c87da44a879b-kube-api-access-rwj7s\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.152664 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/90af064c-9d0a-4818-8e19-c87da44a879b-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.178059 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage02-crc" (UniqueName: "kubernetes.io/local-volume/local-storage02-crc") on node "crc" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.178400 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage05-crc" (UniqueName: "kubernetes.io/local-volume/local-storage05-crc") on node "crc" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.254820 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-iscsi\" (UniqueName: 
\"kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-etc-iscsi\") pod \"e0e8efba-9adc-482b-bd77-553d76648ac6\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.254900 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0e8efba-9adc-482b-bd77-553d76648ac6-httpd-run\") pod \"e0e8efba-9adc-482b-bd77-553d76648ac6\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.254919 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance-cache\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") pod \"e0e8efba-9adc-482b-bd77-553d76648ac6\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.254945 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-run\") pod \"e0e8efba-9adc-482b-bd77-553d76648ac6\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.255027 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-sys\") pod \"e0e8efba-9adc-482b-bd77-553d76648ac6\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.255085 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-lib-modules\") pod \"e0e8efba-9adc-482b-bd77-553d76648ac6\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.255134 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-var-locks-brick\") pod \"e0e8efba-9adc-482b-bd77-553d76648ac6\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.255207 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"glance\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") pod \"e0e8efba-9adc-482b-bd77-553d76648ac6\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.255268 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-etc-nvme\") pod \"e0e8efba-9adc-482b-bd77-553d76648ac6\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.255331 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0e8efba-9adc-482b-bd77-553d76648ac6-logs\") pod \"e0e8efba-9adc-482b-bd77-553d76648ac6\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.255364 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0e8efba-9adc-482b-bd77-553d76648ac6-scripts\") pod \"e0e8efba-9adc-482b-bd77-553d76648ac6\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.255419 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9kwf\" (UniqueName: \"kubernetes.io/projected/e0e8efba-9adc-482b-bd77-553d76648ac6-kube-api-access-l9kwf\") pod \"e0e8efba-9adc-482b-bd77-553d76648ac6\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.255452 4751 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-dev\") pod \"e0e8efba-9adc-482b-bd77-553d76648ac6\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.255500 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0e8efba-9adc-482b-bd77-553d76648ac6-config-data\") pod \"e0e8efba-9adc-482b-bd77-553d76648ac6\" (UID: \"e0e8efba-9adc-482b-bd77-553d76648ac6\") " Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.255908 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage05-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage05-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.255928 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage02-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage02-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.263222 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-etc-iscsi" (OuterVolumeSpecName: "etc-iscsi") pod "e0e8efba-9adc-482b-bd77-553d76648ac6" (UID: "e0e8efba-9adc-482b-bd77-553d76648ac6"). InnerVolumeSpecName "etc-iscsi". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.263540 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0e8efba-9adc-482b-bd77-553d76648ac6-httpd-run" (OuterVolumeSpecName: "httpd-run") pod "e0e8efba-9adc-482b-bd77-553d76648ac6" (UID: "e0e8efba-9adc-482b-bd77-553d76648ac6"). InnerVolumeSpecName "httpd-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.266624 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-etc-nvme" (OuterVolumeSpecName: "etc-nvme") pod "e0e8efba-9adc-482b-bd77-553d76648ac6" (UID: "e0e8efba-9adc-482b-bd77-553d76648ac6"). InnerVolumeSpecName "etc-nvme". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.266892 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e0e8efba-9adc-482b-bd77-553d76648ac6-logs" (OuterVolumeSpecName: "logs") pod "e0e8efba-9adc-482b-bd77-553d76648ac6" (UID: "e0e8efba-9adc-482b-bd77-553d76648ac6"). InnerVolumeSpecName "logs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.266920 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-sys" (OuterVolumeSpecName: "sys") pod "e0e8efba-9adc-482b-bd77-553d76648ac6" (UID: "e0e8efba-9adc-482b-bd77-553d76648ac6"). InnerVolumeSpecName "sys". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.266938 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-run" (OuterVolumeSpecName: "run") pod "e0e8efba-9adc-482b-bd77-553d76648ac6" (UID: "e0e8efba-9adc-482b-bd77-553d76648ac6"). InnerVolumeSpecName "run". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.266954 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-dev" (OuterVolumeSpecName: "dev") pod "e0e8efba-9adc-482b-bd77-553d76648ac6" (UID: "e0e8efba-9adc-482b-bd77-553d76648ac6"). InnerVolumeSpecName "dev". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.266971 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-lib-modules" (OuterVolumeSpecName: "lib-modules") pod "e0e8efba-9adc-482b-bd77-553d76648ac6" (UID: "e0e8efba-9adc-482b-bd77-553d76648ac6"). InnerVolumeSpecName "lib-modules". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.266990 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-var-locks-brick" (OuterVolumeSpecName: "var-locks-brick") pod "e0e8efba-9adc-482b-bd77-553d76648ac6" (UID: "e0e8efba-9adc-482b-bd77-553d76648ac6"). InnerVolumeSpecName "var-locks-brick". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.295290 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e0e8efba-9adc-482b-bd77-553d76648ac6-kube-api-access-l9kwf" (OuterVolumeSpecName: "kube-api-access-l9kwf") pod "e0e8efba-9adc-482b-bd77-553d76648ac6" (UID: "e0e8efba-9adc-482b-bd77-553d76648ac6"). InnerVolumeSpecName "kube-api-access-l9kwf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.295423 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0e8efba-9adc-482b-bd77-553d76648ac6-scripts" (OuterVolumeSpecName: "scripts") pod "e0e8efba-9adc-482b-bd77-553d76648ac6" (UID: "e0e8efba-9adc-482b-bd77-553d76648ac6"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.296669 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage08-crc" (OuterVolumeSpecName: "glance-cache") pod "e0e8efba-9adc-482b-bd77-553d76648ac6" (UID: "e0e8efba-9adc-482b-bd77-553d76648ac6"). InnerVolumeSpecName "local-storage08-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.296776 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage15-crc" (OuterVolumeSpecName: "glance") pod "e0e8efba-9adc-482b-bd77-553d76648ac6" (UID: "e0e8efba-9adc-482b-bd77-553d76648ac6"). InnerVolumeSpecName "local-storage15-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.307183 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.319159 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-external-api-0"] Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.321560 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e0e8efba-9adc-482b-bd77-553d76648ac6-config-data" (OuterVolumeSpecName: "config-data") pod "e0e8efba-9adc-482b-bd77-553d76648ac6" (UID: "e0e8efba-9adc-482b-bd77-553d76648ac6"). 
InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.356986 4751 reconciler_common.go:293] "Volume detached for volume \"dev\" (UniqueName: \"kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-dev\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.357014 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/e0e8efba-9adc-482b-bd77-553d76648ac6-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.357024 4751 reconciler_common.go:293] "Volume detached for volume \"etc-iscsi\" (UniqueName: \"kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-etc-iscsi\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.357032 4751 reconciler_common.go:293] "Volume detached for volume \"httpd-run\" (UniqueName: \"kubernetes.io/empty-dir/e0e8efba-9adc-482b-bd77-553d76648ac6-httpd-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.357064 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" " Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.357090 4751 reconciler_common.go:293] "Volume detached for volume \"run\" (UniqueName: \"kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-run\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.357098 4751 reconciler_common.go:293] "Volume detached for volume \"sys\" (UniqueName: \"kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-sys\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.357106 4751 reconciler_common.go:293] "Volume detached for volume \"lib-modules\" 
(UniqueName: \"kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-lib-modules\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.357113 4751 reconciler_common.go:293] "Volume detached for volume \"var-locks-brick\" (UniqueName: \"kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-var-locks-brick\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.357126 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") on node \"crc\" " Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.357134 4751 reconciler_common.go:293] "Volume detached for volume \"etc-nvme\" (UniqueName: \"kubernetes.io/host-path/e0e8efba-9adc-482b-bd77-553d76648ac6-etc-nvme\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.357142 4751 reconciler_common.go:293] "Volume detached for volume \"logs\" (UniqueName: \"kubernetes.io/empty-dir/e0e8efba-9adc-482b-bd77-553d76648ac6-logs\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.357149 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/e0e8efba-9adc-482b-bd77-553d76648ac6-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.357161 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l9kwf\" (UniqueName: \"kubernetes.io/projected/e0e8efba-9adc-482b-bd77-553d76648ac6-kube-api-access-l9kwf\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.369723 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage15-crc" (UniqueName: "kubernetes.io/local-volume/local-storage15-crc") on node "crc" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.370945 4751 
operation_generator.go:917] UnmountDevice succeeded for volume "local-storage08-crc" (UniqueName: "kubernetes.io/local-volume/local-storage08-crc") on node "crc" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.458635 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage15-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage15-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.458668 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage08-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage08-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.943720 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/glance-default-internal-api-0" event={"ID":"e0e8efba-9adc-482b-bd77-553d76648ac6","Type":"ContainerDied","Data":"8b49f74873882c94e30e439215c7b1269126be109dcab9f528966ad2a1118a0c"} Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.943771 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/glance-default-internal-api-0" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.943801 4751 scope.go:117] "RemoveContainer" containerID="90a6f0dd1552854347833452de54355b3cea39a33ffea2db5092440b501dadb7" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.980924 4751 scope.go:117] "RemoveContainer" containerID="de4d5808fc4ba7308f826962b8650c6ce882dcfa84a2b9961ed782c3d596f76e" Jan 31 15:06:11 crc kubenswrapper[4751]: I0131 15:06:11.992061 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 15:06:12 crc kubenswrapper[4751]: I0131 15:06:12.000015 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-default-internal-api-0"] Jan 31 15:06:12 crc kubenswrapper[4751]: I0131 15:06:12.190743 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-db-create-bbbff"] Jan 31 15:06:12 crc kubenswrapper[4751]: I0131 15:06:12.205653 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-db-create-bbbff"] Jan 31 15:06:12 crc kubenswrapper[4751]: I0131 15:06:12.211568 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glancefb75-account-delete-rcplg"] Jan 31 15:06:12 crc kubenswrapper[4751]: I0131 15:06:12.219762 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glancefb75-account-delete-rcplg"] Jan 31 15:06:12 crc kubenswrapper[4751]: I0131 15:06:12.226049 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/glance-fb75-account-create-update-8nfdw"] Jan 31 15:06:12 crc kubenswrapper[4751]: I0131 15:06:12.232978 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/glance-fb75-account-create-update-8nfdw"] Jan 31 15:06:12 crc kubenswrapper[4751]: I0131 15:06:12.417065 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="09ca3bf6-027a-4e7b-a142-44ad4308fd3e" path="/var/lib/kubelet/pods/09ca3bf6-027a-4e7b-a142-44ad4308fd3e/volumes" Jan 31 15:06:12 crc kubenswrapper[4751]: I0131 15:06:12.417811 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2d2dc104-ad94-47b2-add7-9314eb88e5b0" path="/var/lib/kubelet/pods/2d2dc104-ad94-47b2-add7-9314eb88e5b0/volumes" Jan 31 15:06:12 crc kubenswrapper[4751]: I0131 15:06:12.418545 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="88adbd16-7694-4f3b-8de1-b15932042491" path="/var/lib/kubelet/pods/88adbd16-7694-4f3b-8de1-b15932042491/volumes" Jan 31 15:06:12 crc kubenswrapper[4751]: I0131 15:06:12.420157 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="90af064c-9d0a-4818-8e19-c87da44a879b" path="/var/lib/kubelet/pods/90af064c-9d0a-4818-8e19-c87da44a879b/volumes" Jan 31 15:06:12 crc kubenswrapper[4751]: I0131 15:06:12.422386 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e0e8efba-9adc-482b-bd77-553d76648ac6" path="/var/lib/kubelet/pods/e0e8efba-9adc-482b-bd77-553d76648ac6/volumes" Jan 31 15:06:14 crc kubenswrapper[4751]: I0131 15:06:14.708573 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-ztzpn"] Jan 31 15:06:14 crc kubenswrapper[4751]: E0131 15:06:14.710807 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fada73df-4c18-4f18-9fcd-9fe24825a32c" containerName="glance-httpd" Jan 31 15:06:14 crc kubenswrapper[4751]: I0131 15:06:14.710986 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="fada73df-4c18-4f18-9fcd-9fe24825a32c" containerName="glance-httpd" Jan 31 15:06:14 crc kubenswrapper[4751]: E0131 15:06:14.711165 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="09ca3bf6-027a-4e7b-a142-44ad4308fd3e" containerName="mariadb-account-delete" Jan 31 15:06:14 crc kubenswrapper[4751]: I0131 15:06:14.711289 4751 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="09ca3bf6-027a-4e7b-a142-44ad4308fd3e" containerName="mariadb-account-delete" Jan 31 15:06:14 crc kubenswrapper[4751]: E0131 15:06:14.711436 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fada73df-4c18-4f18-9fcd-9fe24825a32c" containerName="glance-log" Jan 31 15:06:14 crc kubenswrapper[4751]: I0131 15:06:14.711571 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="fada73df-4c18-4f18-9fcd-9fe24825a32c" containerName="glance-log" Jan 31 15:06:14 crc kubenswrapper[4751]: E0131 15:06:14.711708 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0e8efba-9adc-482b-bd77-553d76648ac6" containerName="glance-httpd" Jan 31 15:06:14 crc kubenswrapper[4751]: I0131 15:06:14.711833 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0e8efba-9adc-482b-bd77-553d76648ac6" containerName="glance-httpd" Jan 31 15:06:14 crc kubenswrapper[4751]: E0131 15:06:14.711965 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ce40f98-80af-4a4b-8556-c5c7dd84fc58" containerName="glance-log" Jan 31 15:06:14 crc kubenswrapper[4751]: I0131 15:06:14.712118 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ce40f98-80af-4a4b-8556-c5c7dd84fc58" containerName="glance-log" Jan 31 15:06:14 crc kubenswrapper[4751]: E0131 15:06:14.712260 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90af064c-9d0a-4818-8e19-c87da44a879b" containerName="glance-log" Jan 31 15:06:14 crc kubenswrapper[4751]: I0131 15:06:14.712393 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="90af064c-9d0a-4818-8e19-c87da44a879b" containerName="glance-log" Jan 31 15:06:14 crc kubenswrapper[4751]: E0131 15:06:14.712524 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="90af064c-9d0a-4818-8e19-c87da44a879b" containerName="glance-httpd" Jan 31 15:06:14 crc kubenswrapper[4751]: I0131 15:06:14.712635 4751 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="90af064c-9d0a-4818-8e19-c87da44a879b" containerName="glance-httpd" Jan 31 15:06:14 crc kubenswrapper[4751]: E0131 15:06:14.712776 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2ce40f98-80af-4a4b-8556-c5c7dd84fc58" containerName="glance-httpd" Jan 31 15:06:14 crc kubenswrapper[4751]: I0131 15:06:14.712899 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="2ce40f98-80af-4a4b-8556-c5c7dd84fc58" containerName="glance-httpd" Jan 31 15:06:14 crc kubenswrapper[4751]: E0131 15:06:14.713039 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e0e8efba-9adc-482b-bd77-553d76648ac6" containerName="glance-log" Jan 31 15:06:14 crc kubenswrapper[4751]: I0131 15:06:14.713227 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e0e8efba-9adc-482b-bd77-553d76648ac6" containerName="glance-log" Jan 31 15:06:14 crc kubenswrapper[4751]: I0131 15:06:14.713617 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0e8efba-9adc-482b-bd77-553d76648ac6" containerName="glance-log" Jan 31 15:06:14 crc kubenswrapper[4751]: I0131 15:06:14.713770 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ce40f98-80af-4a4b-8556-c5c7dd84fc58" containerName="glance-httpd" Jan 31 15:06:14 crc kubenswrapper[4751]: I0131 15:06:14.713937 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="09ca3bf6-027a-4e7b-a142-44ad4308fd3e" containerName="mariadb-account-delete" Jan 31 15:06:14 crc kubenswrapper[4751]: I0131 15:06:14.714103 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="2ce40f98-80af-4a4b-8556-c5c7dd84fc58" containerName="glance-log" Jan 31 15:06:14 crc kubenswrapper[4751]: I0131 15:06:14.714248 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="90af064c-9d0a-4818-8e19-c87da44a879b" containerName="glance-httpd" Jan 31 15:06:14 crc kubenswrapper[4751]: I0131 15:06:14.714397 4751 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="90af064c-9d0a-4818-8e19-c87da44a879b" containerName="glance-log" Jan 31 15:06:14 crc kubenswrapper[4751]: I0131 15:06:14.714532 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="fada73df-4c18-4f18-9fcd-9fe24825a32c" containerName="glance-httpd" Jan 31 15:06:14 crc kubenswrapper[4751]: I0131 15:06:14.714658 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="fada73df-4c18-4f18-9fcd-9fe24825a32c" containerName="glance-log" Jan 31 15:06:14 crc kubenswrapper[4751]: I0131 15:06:14.714788 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e0e8efba-9adc-482b-bd77-553d76648ac6" containerName="glance-httpd" Jan 31 15:06:14 crc kubenswrapper[4751]: I0131 15:06:14.716641 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ztzpn" Jan 31 15:06:14 crc kubenswrapper[4751]: I0131 15:06:14.737124 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ztzpn"] Jan 31 15:06:14 crc kubenswrapper[4751]: I0131 15:06:14.821554 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgqfs\" (UniqueName: \"kubernetes.io/projected/d783dd01-73a7-4362-888a-ab84bc8739df-kube-api-access-sgqfs\") pod \"redhat-operators-ztzpn\" (UID: \"d783dd01-73a7-4362-888a-ab84bc8739df\") " pod="openshift-marketplace/redhat-operators-ztzpn" Jan 31 15:06:14 crc kubenswrapper[4751]: I0131 15:06:14.821707 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d783dd01-73a7-4362-888a-ab84bc8739df-catalog-content\") pod \"redhat-operators-ztzpn\" (UID: \"d783dd01-73a7-4362-888a-ab84bc8739df\") " pod="openshift-marketplace/redhat-operators-ztzpn" Jan 31 15:06:14 crc kubenswrapper[4751]: I0131 15:06:14.821784 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d783dd01-73a7-4362-888a-ab84bc8739df-utilities\") pod \"redhat-operators-ztzpn\" (UID: \"d783dd01-73a7-4362-888a-ab84bc8739df\") " pod="openshift-marketplace/redhat-operators-ztzpn" Jan 31 15:06:14 crc kubenswrapper[4751]: I0131 15:06:14.923713 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgqfs\" (UniqueName: \"kubernetes.io/projected/d783dd01-73a7-4362-888a-ab84bc8739df-kube-api-access-sgqfs\") pod \"redhat-operators-ztzpn\" (UID: \"d783dd01-73a7-4362-888a-ab84bc8739df\") " pod="openshift-marketplace/redhat-operators-ztzpn" Jan 31 15:06:14 crc kubenswrapper[4751]: I0131 15:06:14.923784 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d783dd01-73a7-4362-888a-ab84bc8739df-catalog-content\") pod \"redhat-operators-ztzpn\" (UID: \"d783dd01-73a7-4362-888a-ab84bc8739df\") " pod="openshift-marketplace/redhat-operators-ztzpn" Jan 31 15:06:14 crc kubenswrapper[4751]: I0131 15:06:14.923831 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d783dd01-73a7-4362-888a-ab84bc8739df-utilities\") pod \"redhat-operators-ztzpn\" (UID: \"d783dd01-73a7-4362-888a-ab84bc8739df\") " pod="openshift-marketplace/redhat-operators-ztzpn" Jan 31 15:06:14 crc kubenswrapper[4751]: I0131 15:06:14.924313 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d783dd01-73a7-4362-888a-ab84bc8739df-catalog-content\") pod \"redhat-operators-ztzpn\" (UID: \"d783dd01-73a7-4362-888a-ab84bc8739df\") " pod="openshift-marketplace/redhat-operators-ztzpn" Jan 31 15:06:14 crc kubenswrapper[4751]: I0131 15:06:14.924401 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d783dd01-73a7-4362-888a-ab84bc8739df-utilities\") pod \"redhat-operators-ztzpn\" (UID: \"d783dd01-73a7-4362-888a-ab84bc8739df\") " pod="openshift-marketplace/redhat-operators-ztzpn" Jan 31 15:06:14 crc kubenswrapper[4751]: I0131 15:06:14.941775 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sgqfs\" (UniqueName: \"kubernetes.io/projected/d783dd01-73a7-4362-888a-ab84bc8739df-kube-api-access-sgqfs\") pod \"redhat-operators-ztzpn\" (UID: \"d783dd01-73a7-4362-888a-ab84bc8739df\") " pod="openshift-marketplace/redhat-operators-ztzpn" Jan 31 15:06:15 crc kubenswrapper[4751]: I0131 15:06:15.051107 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ztzpn" Jan 31 15:06:15 crc kubenswrapper[4751]: I0131 15:06:15.506911 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-ztzpn"] Jan 31 15:06:15 crc kubenswrapper[4751]: I0131 15:06:15.984407 4751 generic.go:334] "Generic (PLEG): container finished" podID="d783dd01-73a7-4362-888a-ab84bc8739df" containerID="568906c2cc7feff3ba674be852dca9f1ba04b313f69bf113705a16e3309aa4da" exitCode=0 Jan 31 15:06:15 crc kubenswrapper[4751]: I0131 15:06:15.984454 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ztzpn" event={"ID":"d783dd01-73a7-4362-888a-ab84bc8739df","Type":"ContainerDied","Data":"568906c2cc7feff3ba674be852dca9f1ba04b313f69bf113705a16e3309aa4da"} Jan 31 15:06:15 crc kubenswrapper[4751]: I0131 15:06:15.984482 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ztzpn" event={"ID":"d783dd01-73a7-4362-888a-ab84bc8739df","Type":"ContainerStarted","Data":"f8e2ea7f77972f236bec476d7b7bb124f32cd9d091fcbabec970fc3dd4a6de6c"} Jan 31 15:06:15 crc kubenswrapper[4751]: I0131 15:06:15.987058 4751 provider.go:102] Refreshing cache for 
provider: *credentialprovider.defaultDockerConfigProvider Jan 31 15:06:16 crc kubenswrapper[4751]: I0131 15:06:16.992947 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ztzpn" event={"ID":"d783dd01-73a7-4362-888a-ab84bc8739df","Type":"ContainerStarted","Data":"c9f9f3a04268cfbac7a889faf5708fdd7ab535489380c76f269ae48567d562f0"} Jan 31 15:06:18 crc kubenswrapper[4751]: I0131 15:06:18.002372 4751 generic.go:334] "Generic (PLEG): container finished" podID="d783dd01-73a7-4362-888a-ab84bc8739df" containerID="c9f9f3a04268cfbac7a889faf5708fdd7ab535489380c76f269ae48567d562f0" exitCode=0 Jan 31 15:06:18 crc kubenswrapper[4751]: I0131 15:06:18.002425 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ztzpn" event={"ID":"d783dd01-73a7-4362-888a-ab84bc8739df","Type":"ContainerDied","Data":"c9f9f3a04268cfbac7a889faf5708fdd7ab535489380c76f269ae48567d562f0"} Jan 31 15:06:19 crc kubenswrapper[4751]: I0131 15:06:19.032286 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ztzpn" event={"ID":"d783dd01-73a7-4362-888a-ab84bc8739df","Type":"ContainerStarted","Data":"8a8f6ec3fc4799718a2c776fd8b2c60694522c37afe696834e35482b1037e761"} Jan 31 15:06:19 crc kubenswrapper[4751]: I0131 15:06:19.060056 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-ztzpn" podStartSLOduration=2.6316717240000003 podStartE2EDuration="5.060034629s" podCreationTimestamp="2026-01-31 15:06:14 +0000 UTC" firstStartedPulling="2026-01-31 15:06:15.986825606 +0000 UTC m=+1480.361538491" lastFinishedPulling="2026-01-31 15:06:18.415188461 +0000 UTC m=+1482.789901396" observedRunningTime="2026-01-31 15:06:19.056544877 +0000 UTC m=+1483.431257772" watchObservedRunningTime="2026-01-31 15:06:19.060034629 +0000 UTC m=+1483.434747524" Jan 31 15:06:20 crc kubenswrapper[4751]: I0131 15:06:20.289592 4751 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/swift-ring-rebalance-z72xp"] Jan 31 15:06:20 crc kubenswrapper[4751]: I0131 15:06:20.297812 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/swift-ring-rebalance-z72xp"] Jan 31 15:06:20 crc kubenswrapper[4751]: I0131 15:06:20.317693 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/swift-storage-0"] Jan 31 15:06:20 crc kubenswrapper[4751]: I0131 15:06:20.318148 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/swift-storage-0" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="account-server" containerID="cri-o://d93f0c8cc4f4e310c9d207351f924f281c14e44b511b3d4a8f51fed27dbeed8f" gracePeriod=30 Jan 31 15:06:20 crc kubenswrapper[4751]: I0131 15:06:20.318471 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/swift-storage-0" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="swift-recon-cron" containerID="cri-o://ae11b6c0a7f7893c0ba728593c9e1b6db0bc399ae9c55df1f1023d422fc9333c" gracePeriod=30 Jan 31 15:06:20 crc kubenswrapper[4751]: I0131 15:06:20.318515 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/swift-storage-0" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="rsync" containerID="cri-o://519bd8155f30918b172e24832e84310378bd7ea10e796377a992dd3fe9e7276d" gracePeriod=30 Jan 31 15:06:20 crc kubenswrapper[4751]: I0131 15:06:20.318547 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/swift-storage-0" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="object-expirer" containerID="cri-o://950232b5b660c70b9100e81003ff993443f745f40d7da6ba8dc037822059cb8e" gracePeriod=30 Jan 31 15:06:20 crc kubenswrapper[4751]: I0131 15:06:20.318576 4751 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="glance-kuttl-tests/swift-storage-0" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="object-updater" containerID="cri-o://71ca1416bdc095b268ec385a4ebcd269b729c80c3aee7f832db2892f4fe6e78a" gracePeriod=30 Jan 31 15:06:20 crc kubenswrapper[4751]: I0131 15:06:20.318605 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/swift-storage-0" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="object-auditor" containerID="cri-o://1e2003fe4d2366b583ebedf393e2492c910be0ebf3f2652f5a15b1e8c78961df" gracePeriod=30 Jan 31 15:06:20 crc kubenswrapper[4751]: I0131 15:06:20.318635 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/swift-storage-0" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="object-replicator" containerID="cri-o://03b25054db738f38056ec8af2822c9203e252f1a4f95be8c4ab8c1c34de3455c" gracePeriod=30 Jan 31 15:06:20 crc kubenswrapper[4751]: I0131 15:06:20.318664 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/swift-storage-0" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="object-server" containerID="cri-o://1f74cf8c2ce97cd17f509447e4c986197d8af0e8b2f40e7c6a07653c81e66d3b" gracePeriod=30 Jan 31 15:06:20 crc kubenswrapper[4751]: I0131 15:06:20.318692 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/swift-storage-0" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="container-updater" containerID="cri-o://400722d3dac6cd5b0b727b3e599b127bb527981160049f2561a32e7ada14affd" gracePeriod=30 Jan 31 15:06:20 crc kubenswrapper[4751]: I0131 15:06:20.318721 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/swift-storage-0" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="container-auditor" 
containerID="cri-o://34a87b0cfca857f6a2c07d4713531103b7df75f0fdc3e2be299ecaf554d5d9db" gracePeriod=30 Jan 31 15:06:20 crc kubenswrapper[4751]: I0131 15:06:20.318747 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/swift-storage-0" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="container-replicator" containerID="cri-o://03c86cbbc819872662746f2a8384c7c50f07b481c42b5f3d39e0b1e87c7b0557" gracePeriod=30 Jan 31 15:06:20 crc kubenswrapper[4751]: I0131 15:06:20.318776 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/swift-storage-0" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="container-server" containerID="cri-o://3b4375e902d16ea731761694aa85354dcfcda568f68f1d4210b06b07c701f380" gracePeriod=30 Jan 31 15:06:20 crc kubenswrapper[4751]: I0131 15:06:20.318804 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/swift-storage-0" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="account-reaper" containerID="cri-o://a4e14596c5c3a7af2ea9e82736c916fc73b8fcbf27a523b8fe47f9a8e69b1bc2" gracePeriod=30 Jan 31 15:06:20 crc kubenswrapper[4751]: I0131 15:06:20.318832 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/swift-storage-0" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="account-auditor" containerID="cri-o://461a1aaa8bc72705195647c97b28e111484e900c69e9a4da07e510a6c451ed4c" gracePeriod=30 Jan 31 15:06:20 crc kubenswrapper[4751]: I0131 15:06:20.318858 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/swift-storage-0" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="account-replicator" containerID="cri-o://d0ab6cd06ea2abbd171a5345dc579495df175d9d8a52b30a0139e24e65e43616" gracePeriod=30 Jan 31 15:06:20 crc kubenswrapper[4751]: I0131 15:06:20.360042 4751 kubelet.go:2437] "SyncLoop 
DELETE" source="api" pods=["glance-kuttl-tests/swift-proxy-6d699db77c-58vrl"] Jan 31 15:06:20 crc kubenswrapper[4751]: I0131 15:06:20.360299 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" podUID="26ee66f9-5607-4559-9a64-6767dfbcc078" containerName="proxy-httpd" containerID="cri-o://7994c2196fb62df3ba578a245a33690acd7fb1518638072c0dcea5a66bf4d321" gracePeriod=30 Jan 31 15:06:20 crc kubenswrapper[4751]: I0131 15:06:20.360427 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" podUID="26ee66f9-5607-4559-9a64-6767dfbcc078" containerName="proxy-server" containerID="cri-o://67b86444b56a0b82ee27cdb476ad3cf81bbe2a2988cb8d86234cd5cb875fb2be" gracePeriod=30 Jan 31 15:06:20 crc kubenswrapper[4751]: I0131 15:06:20.415173 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="606aa4a9-2afe-4f51-a562-90f716040b58" path="/var/lib/kubelet/pods/606aa4a9-2afe-4f51-a562-90f716040b58/volumes" Jan 31 15:06:20 crc kubenswrapper[4751]: I0131 15:06:20.706331 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" podUID="26ee66f9-5607-4559-9a64-6767dfbcc078" containerName="proxy-server" probeResult="failure" output="Get \"http://10.217.0.91:8080/healthcheck\": dial tcp 10.217.0.91:8080: connect: connection refused" Jan 31 15:06:20 crc kubenswrapper[4751]: I0131 15:06:20.706565 4751 prober.go:107] "Probe failed" probeType="Readiness" pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" podUID="26ee66f9-5607-4559-9a64-6767dfbcc078" containerName="proxy-httpd" probeResult="failure" output="Get \"http://10.217.0.91:8080/healthcheck\": dial tcp 10.217.0.91:8080: connect: connection refused" Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.003301 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.054457 4751 generic.go:334] "Generic (PLEG): container finished" podID="440e5809-7b49-4b21-99dd-668468c84017" containerID="519bd8155f30918b172e24832e84310378bd7ea10e796377a992dd3fe9e7276d" exitCode=0 Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.054488 4751 generic.go:334] "Generic (PLEG): container finished" podID="440e5809-7b49-4b21-99dd-668468c84017" containerID="950232b5b660c70b9100e81003ff993443f745f40d7da6ba8dc037822059cb8e" exitCode=0 Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.054497 4751 generic.go:334] "Generic (PLEG): container finished" podID="440e5809-7b49-4b21-99dd-668468c84017" containerID="71ca1416bdc095b268ec385a4ebcd269b729c80c3aee7f832db2892f4fe6e78a" exitCode=0 Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.054503 4751 generic.go:334] "Generic (PLEG): container finished" podID="440e5809-7b49-4b21-99dd-668468c84017" containerID="1e2003fe4d2366b583ebedf393e2492c910be0ebf3f2652f5a15b1e8c78961df" exitCode=0 Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.054509 4751 generic.go:334] "Generic (PLEG): container finished" podID="440e5809-7b49-4b21-99dd-668468c84017" containerID="03b25054db738f38056ec8af2822c9203e252f1a4f95be8c4ab8c1c34de3455c" exitCode=0 Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.054516 4751 generic.go:334] "Generic (PLEG): container finished" podID="440e5809-7b49-4b21-99dd-668468c84017" containerID="1f74cf8c2ce97cd17f509447e4c986197d8af0e8b2f40e7c6a07653c81e66d3b" exitCode=0 Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.054523 4751 generic.go:334] "Generic (PLEG): container finished" podID="440e5809-7b49-4b21-99dd-668468c84017" containerID="400722d3dac6cd5b0b727b3e599b127bb527981160049f2561a32e7ada14affd" exitCode=0 Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.054530 4751 generic.go:334] "Generic (PLEG): container finished" 
podID="440e5809-7b49-4b21-99dd-668468c84017" containerID="34a87b0cfca857f6a2c07d4713531103b7df75f0fdc3e2be299ecaf554d5d9db" exitCode=0 Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.054536 4751 generic.go:334] "Generic (PLEG): container finished" podID="440e5809-7b49-4b21-99dd-668468c84017" containerID="03c86cbbc819872662746f2a8384c7c50f07b481c42b5f3d39e0b1e87c7b0557" exitCode=0 Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.054542 4751 generic.go:334] "Generic (PLEG): container finished" podID="440e5809-7b49-4b21-99dd-668468c84017" containerID="3b4375e902d16ea731761694aa85354dcfcda568f68f1d4210b06b07c701f380" exitCode=0 Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.054549 4751 generic.go:334] "Generic (PLEG): container finished" podID="440e5809-7b49-4b21-99dd-668468c84017" containerID="a4e14596c5c3a7af2ea9e82736c916fc73b8fcbf27a523b8fe47f9a8e69b1bc2" exitCode=0 Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.054555 4751 generic.go:334] "Generic (PLEG): container finished" podID="440e5809-7b49-4b21-99dd-668468c84017" containerID="461a1aaa8bc72705195647c97b28e111484e900c69e9a4da07e510a6c451ed4c" exitCode=0 Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.054560 4751 generic.go:334] "Generic (PLEG): container finished" podID="440e5809-7b49-4b21-99dd-668468c84017" containerID="d0ab6cd06ea2abbd171a5345dc579495df175d9d8a52b30a0139e24e65e43616" exitCode=0 Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.054567 4751 generic.go:334] "Generic (PLEG): container finished" podID="440e5809-7b49-4b21-99dd-668468c84017" containerID="d93f0c8cc4f4e310c9d207351f924f281c14e44b511b3d4a8f51fed27dbeed8f" exitCode=0 Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.054605 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"440e5809-7b49-4b21-99dd-668468c84017","Type":"ContainerDied","Data":"519bd8155f30918b172e24832e84310378bd7ea10e796377a992dd3fe9e7276d"} Jan 31 15:06:21 crc 
kubenswrapper[4751]: I0131 15:06:21.054630 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"440e5809-7b49-4b21-99dd-668468c84017","Type":"ContainerDied","Data":"950232b5b660c70b9100e81003ff993443f745f40d7da6ba8dc037822059cb8e"} Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.054640 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"440e5809-7b49-4b21-99dd-668468c84017","Type":"ContainerDied","Data":"71ca1416bdc095b268ec385a4ebcd269b729c80c3aee7f832db2892f4fe6e78a"} Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.054650 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"440e5809-7b49-4b21-99dd-668468c84017","Type":"ContainerDied","Data":"1e2003fe4d2366b583ebedf393e2492c910be0ebf3f2652f5a15b1e8c78961df"} Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.054659 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"440e5809-7b49-4b21-99dd-668468c84017","Type":"ContainerDied","Data":"03b25054db738f38056ec8af2822c9203e252f1a4f95be8c4ab8c1c34de3455c"} Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.054669 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"440e5809-7b49-4b21-99dd-668468c84017","Type":"ContainerDied","Data":"1f74cf8c2ce97cd17f509447e4c986197d8af0e8b2f40e7c6a07653c81e66d3b"} Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.054677 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"440e5809-7b49-4b21-99dd-668468c84017","Type":"ContainerDied","Data":"400722d3dac6cd5b0b727b3e599b127bb527981160049f2561a32e7ada14affd"} Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.054685 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" 
event={"ID":"440e5809-7b49-4b21-99dd-668468c84017","Type":"ContainerDied","Data":"34a87b0cfca857f6a2c07d4713531103b7df75f0fdc3e2be299ecaf554d5d9db"} Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.054694 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"440e5809-7b49-4b21-99dd-668468c84017","Type":"ContainerDied","Data":"03c86cbbc819872662746f2a8384c7c50f07b481c42b5f3d39e0b1e87c7b0557"} Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.054702 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"440e5809-7b49-4b21-99dd-668468c84017","Type":"ContainerDied","Data":"3b4375e902d16ea731761694aa85354dcfcda568f68f1d4210b06b07c701f380"} Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.054712 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"440e5809-7b49-4b21-99dd-668468c84017","Type":"ContainerDied","Data":"a4e14596c5c3a7af2ea9e82736c916fc73b8fcbf27a523b8fe47f9a8e69b1bc2"} Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.054723 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"440e5809-7b49-4b21-99dd-668468c84017","Type":"ContainerDied","Data":"461a1aaa8bc72705195647c97b28e111484e900c69e9a4da07e510a6c451ed4c"} Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.054735 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"440e5809-7b49-4b21-99dd-668468c84017","Type":"ContainerDied","Data":"d0ab6cd06ea2abbd171a5345dc579495df175d9d8a52b30a0139e24e65e43616"} Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.054745 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"440e5809-7b49-4b21-99dd-668468c84017","Type":"ContainerDied","Data":"d93f0c8cc4f4e310c9d207351f924f281c14e44b511b3d4a8f51fed27dbeed8f"} 
Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.057043 4751 generic.go:334] "Generic (PLEG): container finished" podID="26ee66f9-5607-4559-9a64-6767dfbcc078" containerID="67b86444b56a0b82ee27cdb476ad3cf81bbe2a2988cb8d86234cd5cb875fb2be" exitCode=0 Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.057063 4751 generic.go:334] "Generic (PLEG): container finished" podID="26ee66f9-5607-4559-9a64-6767dfbcc078" containerID="7994c2196fb62df3ba578a245a33690acd7fb1518638072c0dcea5a66bf4d321" exitCode=0 Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.057134 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" event={"ID":"26ee66f9-5607-4559-9a64-6767dfbcc078","Type":"ContainerDied","Data":"67b86444b56a0b82ee27cdb476ad3cf81bbe2a2988cb8d86234cd5cb875fb2be"} Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.057215 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" event={"ID":"26ee66f9-5607-4559-9a64-6767dfbcc078","Type":"ContainerDied","Data":"7994c2196fb62df3ba578a245a33690acd7fb1518638072c0dcea5a66bf4d321"} Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.057231 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" event={"ID":"26ee66f9-5607-4559-9a64-6767dfbcc078","Type":"ContainerDied","Data":"85b271a6f57bacb15bd471b08b0e25366c5d1865f74c103bc014e71042620a53"} Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.057282 4751 scope.go:117] "RemoveContainer" containerID="67b86444b56a0b82ee27cdb476ad3cf81bbe2a2988cb8d86234cd5cb875fb2be" Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.057156 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/swift-proxy-6d699db77c-58vrl" Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.078579 4751 scope.go:117] "RemoveContainer" containerID="7994c2196fb62df3ba578a245a33690acd7fb1518638072c0dcea5a66bf4d321" Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.097375 4751 scope.go:117] "RemoveContainer" containerID="67b86444b56a0b82ee27cdb476ad3cf81bbe2a2988cb8d86234cd5cb875fb2be" Jan 31 15:06:21 crc kubenswrapper[4751]: E0131 15:06:21.097868 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"67b86444b56a0b82ee27cdb476ad3cf81bbe2a2988cb8d86234cd5cb875fb2be\": container with ID starting with 67b86444b56a0b82ee27cdb476ad3cf81bbe2a2988cb8d86234cd5cb875fb2be not found: ID does not exist" containerID="67b86444b56a0b82ee27cdb476ad3cf81bbe2a2988cb8d86234cd5cb875fb2be" Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.097915 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67b86444b56a0b82ee27cdb476ad3cf81bbe2a2988cb8d86234cd5cb875fb2be"} err="failed to get container status \"67b86444b56a0b82ee27cdb476ad3cf81bbe2a2988cb8d86234cd5cb875fb2be\": rpc error: code = NotFound desc = could not find container \"67b86444b56a0b82ee27cdb476ad3cf81bbe2a2988cb8d86234cd5cb875fb2be\": container with ID starting with 67b86444b56a0b82ee27cdb476ad3cf81bbe2a2988cb8d86234cd5cb875fb2be not found: ID does not exist" Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.097945 4751 scope.go:117] "RemoveContainer" containerID="7994c2196fb62df3ba578a245a33690acd7fb1518638072c0dcea5a66bf4d321" Jan 31 15:06:21 crc kubenswrapper[4751]: E0131 15:06:21.098377 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7994c2196fb62df3ba578a245a33690acd7fb1518638072c0dcea5a66bf4d321\": container with ID starting with 
7994c2196fb62df3ba578a245a33690acd7fb1518638072c0dcea5a66bf4d321 not found: ID does not exist" containerID="7994c2196fb62df3ba578a245a33690acd7fb1518638072c0dcea5a66bf4d321" Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.098402 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7994c2196fb62df3ba578a245a33690acd7fb1518638072c0dcea5a66bf4d321"} err="failed to get container status \"7994c2196fb62df3ba578a245a33690acd7fb1518638072c0dcea5a66bf4d321\": rpc error: code = NotFound desc = could not find container \"7994c2196fb62df3ba578a245a33690acd7fb1518638072c0dcea5a66bf4d321\": container with ID starting with 7994c2196fb62df3ba578a245a33690acd7fb1518638072c0dcea5a66bf4d321 not found: ID does not exist" Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.098420 4751 scope.go:117] "RemoveContainer" containerID="67b86444b56a0b82ee27cdb476ad3cf81bbe2a2988cb8d86234cd5cb875fb2be" Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.098782 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"67b86444b56a0b82ee27cdb476ad3cf81bbe2a2988cb8d86234cd5cb875fb2be"} err="failed to get container status \"67b86444b56a0b82ee27cdb476ad3cf81bbe2a2988cb8d86234cd5cb875fb2be\": rpc error: code = NotFound desc = could not find container \"67b86444b56a0b82ee27cdb476ad3cf81bbe2a2988cb8d86234cd5cb875fb2be\": container with ID starting with 67b86444b56a0b82ee27cdb476ad3cf81bbe2a2988cb8d86234cd5cb875fb2be not found: ID does not exist" Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.098803 4751 scope.go:117] "RemoveContainer" containerID="7994c2196fb62df3ba578a245a33690acd7fb1518638072c0dcea5a66bf4d321" Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.099231 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7994c2196fb62df3ba578a245a33690acd7fb1518638072c0dcea5a66bf4d321"} err="failed to get container status 
\"7994c2196fb62df3ba578a245a33690acd7fb1518638072c0dcea5a66bf4d321\": rpc error: code = NotFound desc = could not find container \"7994c2196fb62df3ba578a245a33690acd7fb1518638072c0dcea5a66bf4d321\": container with ID starting with 7994c2196fb62df3ba578a245a33690acd7fb1518638072c0dcea5a66bf4d321 not found: ID does not exist" Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.121985 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26ee66f9-5607-4559-9a64-6767dfbcc078-run-httpd\") pod \"26ee66f9-5607-4559-9a64-6767dfbcc078\" (UID: \"26ee66f9-5607-4559-9a64-6767dfbcc078\") " Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.122028 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26ee66f9-5607-4559-9a64-6767dfbcc078-config-data\") pod \"26ee66f9-5607-4559-9a64-6767dfbcc078\" (UID: \"26ee66f9-5607-4559-9a64-6767dfbcc078\") " Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.122092 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26ee66f9-5607-4559-9a64-6767dfbcc078-log-httpd\") pod \"26ee66f9-5607-4559-9a64-6767dfbcc078\" (UID: \"26ee66f9-5607-4559-9a64-6767dfbcc078\") " Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.122122 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/26ee66f9-5607-4559-9a64-6767dfbcc078-etc-swift\") pod \"26ee66f9-5607-4559-9a64-6767dfbcc078\" (UID: \"26ee66f9-5607-4559-9a64-6767dfbcc078\") " Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.122245 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6xgx9\" (UniqueName: \"kubernetes.io/projected/26ee66f9-5607-4559-9a64-6767dfbcc078-kube-api-access-6xgx9\") pod 
\"26ee66f9-5607-4559-9a64-6767dfbcc078\" (UID: \"26ee66f9-5607-4559-9a64-6767dfbcc078\") " Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.122522 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26ee66f9-5607-4559-9a64-6767dfbcc078-run-httpd" (OuterVolumeSpecName: "run-httpd") pod "26ee66f9-5607-4559-9a64-6767dfbcc078" (UID: "26ee66f9-5607-4559-9a64-6767dfbcc078"). InnerVolumeSpecName "run-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.122658 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/26ee66f9-5607-4559-9a64-6767dfbcc078-log-httpd" (OuterVolumeSpecName: "log-httpd") pod "26ee66f9-5607-4559-9a64-6767dfbcc078" (UID: "26ee66f9-5607-4559-9a64-6767dfbcc078"). InnerVolumeSpecName "log-httpd". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.126831 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26ee66f9-5607-4559-9a64-6767dfbcc078-kube-api-access-6xgx9" (OuterVolumeSpecName: "kube-api-access-6xgx9") pod "26ee66f9-5607-4559-9a64-6767dfbcc078" (UID: "26ee66f9-5607-4559-9a64-6767dfbcc078"). InnerVolumeSpecName "kube-api-access-6xgx9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.127006 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/26ee66f9-5607-4559-9a64-6767dfbcc078-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "26ee66f9-5607-4559-9a64-6767dfbcc078" (UID: "26ee66f9-5607-4559-9a64-6767dfbcc078"). InnerVolumeSpecName "etc-swift". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.154350 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/26ee66f9-5607-4559-9a64-6767dfbcc078-config-data" (OuterVolumeSpecName: "config-data") pod "26ee66f9-5607-4559-9a64-6767dfbcc078" (UID: "26ee66f9-5607-4559-9a64-6767dfbcc078"). InnerVolumeSpecName "config-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.224391 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6xgx9\" (UniqueName: \"kubernetes.io/projected/26ee66f9-5607-4559-9a64-6767dfbcc078-kube-api-access-6xgx9\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.224421 4751 reconciler_common.go:293] "Volume detached for volume \"run-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26ee66f9-5607-4559-9a64-6767dfbcc078-run-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.224431 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/26ee66f9-5607-4559-9a64-6767dfbcc078-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.224438 4751 reconciler_common.go:293] "Volume detached for volume \"log-httpd\" (UniqueName: \"kubernetes.io/empty-dir/26ee66f9-5607-4559-9a64-6767dfbcc078-log-httpd\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.224446 4751 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/26ee66f9-5607-4559-9a64-6767dfbcc078-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.388849 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/swift-proxy-6d699db77c-58vrl"] Jan 31 15:06:21 crc 
kubenswrapper[4751]: I0131 15:06:21.396000 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/swift-proxy-6d699db77c-58vrl"] Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.451942 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/keystone-db-sync-56bwv"] Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.457216 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/keystone-db-sync-56bwv"] Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.470853 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/keystone-bootstrap-hnxnd"] Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.494699 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/keystone-bootstrap-hnxnd"] Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.508628 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/keystone-859d455469-zqqzw"] Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.508893 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/keystone-859d455469-zqqzw" podUID="dabb55da-08db-4d2a-8b2d-ac7b2b657053" containerName="keystone-api" containerID="cri-o://6d283eaf7e9a4eadb7f123ebbd0723c09363494de09f8fb76c6271216f1a8ecb" gracePeriod=30 Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.529730 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/keystone-cron-29497861-5bd6d"] Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.533348 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/keystone-cron-29497861-5bd6d"] Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.539501 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/keystonedfde-account-delete-rzcsj"] Jan 31 15:06:21 crc kubenswrapper[4751]: E0131 15:06:21.539800 4751 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="26ee66f9-5607-4559-9a64-6767dfbcc078" containerName="proxy-httpd" Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.539816 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="26ee66f9-5607-4559-9a64-6767dfbcc078" containerName="proxy-httpd" Jan 31 15:06:21 crc kubenswrapper[4751]: E0131 15:06:21.539844 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="26ee66f9-5607-4559-9a64-6767dfbcc078" containerName="proxy-server" Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.539851 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="26ee66f9-5607-4559-9a64-6767dfbcc078" containerName="proxy-server" Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.539988 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="26ee66f9-5607-4559-9a64-6767dfbcc078" containerName="proxy-server" Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.540011 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="26ee66f9-5607-4559-9a64-6767dfbcc078" containerName="proxy-httpd" Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.540530 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystonedfde-account-delete-rzcsj" Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.545360 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystonedfde-account-delete-rzcsj"] Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.653993 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/949efaf4-a5db-405d-9d40-c44d525c603c-operator-scripts\") pod \"keystonedfde-account-delete-rzcsj\" (UID: \"949efaf4-a5db-405d-9d40-c44d525c603c\") " pod="glance-kuttl-tests/keystonedfde-account-delete-rzcsj" Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.654135 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lk8gf\" (UniqueName: \"kubernetes.io/projected/949efaf4-a5db-405d-9d40-c44d525c603c-kube-api-access-lk8gf\") pod \"keystonedfde-account-delete-rzcsj\" (UID: \"949efaf4-a5db-405d-9d40-c44d525c603c\") " pod="glance-kuttl-tests/keystonedfde-account-delete-rzcsj" Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.755124 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/949efaf4-a5db-405d-9d40-c44d525c603c-operator-scripts\") pod \"keystonedfde-account-delete-rzcsj\" (UID: \"949efaf4-a5db-405d-9d40-c44d525c603c\") " pod="glance-kuttl-tests/keystonedfde-account-delete-rzcsj" Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.755725 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lk8gf\" (UniqueName: \"kubernetes.io/projected/949efaf4-a5db-405d-9d40-c44d525c603c-kube-api-access-lk8gf\") pod \"keystonedfde-account-delete-rzcsj\" (UID: \"949efaf4-a5db-405d-9d40-c44d525c603c\") " pod="glance-kuttl-tests/keystonedfde-account-delete-rzcsj" Jan 31 15:06:21 crc 
kubenswrapper[4751]: I0131 15:06:21.755889 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/949efaf4-a5db-405d-9d40-c44d525c603c-operator-scripts\") pod \"keystonedfde-account-delete-rzcsj\" (UID: \"949efaf4-a5db-405d-9d40-c44d525c603c\") " pod="glance-kuttl-tests/keystonedfde-account-delete-rzcsj" Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.777851 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lk8gf\" (UniqueName: \"kubernetes.io/projected/949efaf4-a5db-405d-9d40-c44d525c603c-kube-api-access-lk8gf\") pod \"keystonedfde-account-delete-rzcsj\" (UID: \"949efaf4-a5db-405d-9d40-c44d525c603c\") " pod="glance-kuttl-tests/keystonedfde-account-delete-rzcsj" Jan 31 15:06:21 crc kubenswrapper[4751]: I0131 15:06:21.856154 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystonedfde-account-delete-rzcsj" Jan 31 15:06:22 crc kubenswrapper[4751]: I0131 15:06:22.073757 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/keystonedfde-account-delete-rzcsj"] Jan 31 15:06:22 crc kubenswrapper[4751]: W0131 15:06:22.087875 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod949efaf4_a5db_405d_9d40_c44d525c603c.slice/crio-5c96c33c1fd64acca4de260b260d0cd9dfb53370d0a6e41639488a98be0757ce WatchSource:0}: Error finding container 5c96c33c1fd64acca4de260b260d0cd9dfb53370d0a6e41639488a98be0757ce: Status 404 returned error can't find the container with id 5c96c33c1fd64acca4de260b260d0cd9dfb53370d0a6e41639488a98be0757ce Jan 31 15:06:22 crc kubenswrapper[4751]: I0131 15:06:22.418354 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="041ede36-25a1-4d6d-9de2-d16218c5fc67" path="/var/lib/kubelet/pods/041ede36-25a1-4d6d-9de2-d16218c5fc67/volumes" Jan 31 15:06:22 crc 
kubenswrapper[4751]: I0131 15:06:22.419358 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="26ee66f9-5607-4559-9a64-6767dfbcc078" path="/var/lib/kubelet/pods/26ee66f9-5607-4559-9a64-6767dfbcc078/volumes" Jan 31 15:06:22 crc kubenswrapper[4751]: I0131 15:06:22.420039 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bce6ceb9-5b0d-4ec7-9492-94dce9bb261d" path="/var/lib/kubelet/pods/bce6ceb9-5b0d-4ec7-9492-94dce9bb261d/volumes" Jan 31 15:06:22 crc kubenswrapper[4751]: I0131 15:06:22.421232 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff5e8bad-e481-445e-99e8-5a5487e908d8" path="/var/lib/kubelet/pods/ff5e8bad-e481-445e-99e8-5a5487e908d8/volumes" Jan 31 15:06:23 crc kubenswrapper[4751]: I0131 15:06:23.079617 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystonedfde-account-delete-rzcsj" event={"ID":"949efaf4-a5db-405d-9d40-c44d525c603c","Type":"ContainerStarted","Data":"5c96c33c1fd64acca4de260b260d0cd9dfb53370d0a6e41639488a98be0757ce"} Jan 31 15:06:24 crc kubenswrapper[4751]: I0131 15:06:24.088761 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystonedfde-account-delete-rzcsj" event={"ID":"949efaf4-a5db-405d-9d40-c44d525c603c","Type":"ContainerStarted","Data":"b5cb3ee4032129b568b4ee0fa56e2f13d4d48986ad6a3c19ca00fa7b56b0e716"} Jan 31 15:06:24 crc kubenswrapper[4751]: I0131 15:06:24.108005 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="glance-kuttl-tests/keystonedfde-account-delete-rzcsj" podStartSLOduration=3.1079861 podStartE2EDuration="3.1079861s" podCreationTimestamp="2026-01-31 15:06:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:06:24.102029461 +0000 UTC m=+1488.476742346" watchObservedRunningTime="2026-01-31 15:06:24.1079861 +0000 UTC m=+1488.482698985" Jan 31 15:06:25 crc 
kubenswrapper[4751]: I0131 15:06:25.052233 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-ztzpn" Jan 31 15:06:25 crc kubenswrapper[4751]: I0131 15:06:25.052624 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-ztzpn" Jan 31 15:06:25 crc kubenswrapper[4751]: I0131 15:06:25.124040 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-ztzpn" Jan 31 15:06:25 crc kubenswrapper[4751]: I0131 15:06:25.178643 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-ztzpn" Jan 31 15:06:25 crc kubenswrapper[4751]: I0131 15:06:25.360168 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ztzpn"] Jan 31 15:06:25 crc kubenswrapper[4751]: I0131 15:06:25.552729 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["glance-kuttl-tests/root-account-create-update-6tvsv"] Jan 31 15:06:25 crc kubenswrapper[4751]: I0131 15:06:25.553837 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/root-account-create-update-6tvsv" Jan 31 15:06:25 crc kubenswrapper[4751]: I0131 15:06:25.571693 4751 reflector.go:368] Caches populated for *v1.Secret from object-"glance-kuttl-tests"/"openstack-mariadb-root-db-secret" Jan 31 15:06:25 crc kubenswrapper[4751]: I0131 15:06:25.624618 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/root-account-create-update-6tvsv"] Jan 31 15:06:25 crc kubenswrapper[4751]: I0131 15:06:25.640625 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/openstack-galera-1"] Jan 31 15:06:25 crc kubenswrapper[4751]: I0131 15:06:25.645924 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/openstack-galera-2"] Jan 31 15:06:25 crc kubenswrapper[4751]: I0131 15:06:25.652081 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/openstack-galera-0"] Jan 31 15:06:25 crc kubenswrapper[4751]: I0131 15:06:25.662645 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/root-account-create-update-6tvsv"] Jan 31 15:06:25 crc kubenswrapper[4751]: E0131 15:06:25.663448 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="unmounted volumes=[kube-api-access-sgblm operator-scripts], unattached volumes=[], failed to process volumes=[]: context canceled" pod="glance-kuttl-tests/root-account-create-update-6tvsv" podUID="f6b12715-fb69-4237-ac73-a59a6972d988" Jan 31 15:06:25 crc kubenswrapper[4751]: I0131 15:06:25.730018 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6b12715-fb69-4237-ac73-a59a6972d988-operator-scripts\") pod \"root-account-create-update-6tvsv\" (UID: \"f6b12715-fb69-4237-ac73-a59a6972d988\") " pod="glance-kuttl-tests/root-account-create-update-6tvsv" Jan 31 15:06:25 crc kubenswrapper[4751]: I0131 15:06:25.730206 4751 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sgblm\" (UniqueName: \"kubernetes.io/projected/f6b12715-fb69-4237-ac73-a59a6972d988-kube-api-access-sgblm\") pod \"root-account-create-update-6tvsv\" (UID: \"f6b12715-fb69-4237-ac73-a59a6972d988\") " pod="glance-kuttl-tests/root-account-create-update-6tvsv" Jan 31 15:06:25 crc kubenswrapper[4751]: I0131 15:06:25.767793 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/openstack-galera-2" podUID="3fcd9bac-c0cb-4de4-b630-0db07f110da7" containerName="galera" containerID="cri-o://eeb0727f6d7a3d1d251766b50edc1058bc460aa581ba0d5f746de288b9b3f16b" gracePeriod=30 Jan 31 15:06:25 crc kubenswrapper[4751]: I0131 15:06:25.831955 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6b12715-fb69-4237-ac73-a59a6972d988-operator-scripts\") pod \"root-account-create-update-6tvsv\" (UID: \"f6b12715-fb69-4237-ac73-a59a6972d988\") " pod="glance-kuttl-tests/root-account-create-update-6tvsv" Jan 31 15:06:25 crc kubenswrapper[4751]: I0131 15:06:25.832137 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sgblm\" (UniqueName: \"kubernetes.io/projected/f6b12715-fb69-4237-ac73-a59a6972d988-kube-api-access-sgblm\") pod \"root-account-create-update-6tvsv\" (UID: \"f6b12715-fb69-4237-ac73-a59a6972d988\") " pod="glance-kuttl-tests/root-account-create-update-6tvsv" Jan 31 15:06:25 crc kubenswrapper[4751]: E0131 15:06:25.832148 4751 configmap.go:193] Couldn't get configMap glance-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 31 15:06:25 crc kubenswrapper[4751]: E0131 15:06:25.832232 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f6b12715-fb69-4237-ac73-a59a6972d988-operator-scripts podName:f6b12715-fb69-4237-ac73-a59a6972d988 nodeName:}" failed. 
No retries permitted until 2026-01-31 15:06:26.332205098 +0000 UTC m=+1490.706917983 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/f6b12715-fb69-4237-ac73-a59a6972d988-operator-scripts") pod "root-account-create-update-6tvsv" (UID: "f6b12715-fb69-4237-ac73-a59a6972d988") : configmap "openstack-scripts" not found Jan 31 15:06:25 crc kubenswrapper[4751]: E0131 15:06:25.840159 4751 projected.go:194] Error preparing data for projected volume kube-api-access-sgblm for pod glance-kuttl-tests/root-account-create-update-6tvsv: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 31 15:06:25 crc kubenswrapper[4751]: E0131 15:06:25.840255 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f6b12715-fb69-4237-ac73-a59a6972d988-kube-api-access-sgblm podName:f6b12715-fb69-4237-ac73-a59a6972d988 nodeName:}" failed. No retries permitted until 2026-01-31 15:06:26.340232202 +0000 UTC m=+1490.714945087 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-sgblm" (UniqueName: "kubernetes.io/projected/f6b12715-fb69-4237-ac73-a59a6972d988-kube-api-access-sgblm") pod "root-account-create-update-6tvsv" (UID: "f6b12715-fb69-4237-ac73-a59a6972d988") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.082025 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystone-859d455469-zqqzw" Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.107581 4751 generic.go:334] "Generic (PLEG): container finished" podID="dabb55da-08db-4d2a-8b2d-ac7b2b657053" containerID="6d283eaf7e9a4eadb7f123ebbd0723c09363494de09f8fb76c6271216f1a8ecb" exitCode=0 Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.107690 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/keystone-859d455469-zqqzw" Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.108196 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-859d455469-zqqzw" event={"ID":"dabb55da-08db-4d2a-8b2d-ac7b2b657053","Type":"ContainerDied","Data":"6d283eaf7e9a4eadb7f123ebbd0723c09363494de09f8fb76c6271216f1a8ecb"} Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.108261 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystone-859d455469-zqqzw" event={"ID":"dabb55da-08db-4d2a-8b2d-ac7b2b657053","Type":"ContainerDied","Data":"02e1eb0fcf9c093b28dd6fc9f0fb02613d1865a02336d6e8e82c2fa50f8597a7"} Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.108286 4751 scope.go:117] "RemoveContainer" containerID="6d283eaf7e9a4eadb7f123ebbd0723c09363494de09f8fb76c6271216f1a8ecb" Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.110405 4751 generic.go:334] "Generic (PLEG): container finished" podID="949efaf4-a5db-405d-9d40-c44d525c603c" containerID="b5cb3ee4032129b568b4ee0fa56e2f13d4d48986ad6a3c19ca00fa7b56b0e716" exitCode=0 Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.110446 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystonedfde-account-delete-rzcsj" event={"ID":"949efaf4-a5db-405d-9d40-c44d525c603c","Type":"ContainerDied","Data":"b5cb3ee4032129b568b4ee0fa56e2f13d4d48986ad6a3c19ca00fa7b56b0e716"} Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.110531 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/root-account-create-update-6tvsv" Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.130105 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/root-account-create-update-6tvsv" Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.134263 4751 scope.go:117] "RemoveContainer" containerID="6d283eaf7e9a4eadb7f123ebbd0723c09363494de09f8fb76c6271216f1a8ecb" Jan 31 15:06:26 crc kubenswrapper[4751]: E0131 15:06:26.134654 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d283eaf7e9a4eadb7f123ebbd0723c09363494de09f8fb76c6271216f1a8ecb\": container with ID starting with 6d283eaf7e9a4eadb7f123ebbd0723c09363494de09f8fb76c6271216f1a8ecb not found: ID does not exist" containerID="6d283eaf7e9a4eadb7f123ebbd0723c09363494de09f8fb76c6271216f1a8ecb" Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.134703 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d283eaf7e9a4eadb7f123ebbd0723c09363494de09f8fb76c6271216f1a8ecb"} err="failed to get container status \"6d283eaf7e9a4eadb7f123ebbd0723c09363494de09f8fb76c6271216f1a8ecb\": rpc error: code = NotFound desc = could not find container \"6d283eaf7e9a4eadb7f123ebbd0723c09363494de09f8fb76c6271216f1a8ecb\": container with ID starting with 6d283eaf7e9a4eadb7f123ebbd0723c09363494de09f8fb76c6271216f1a8ecb not found: ID does not exist" Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.141093 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dabb55da-08db-4d2a-8b2d-ac7b2b657053-config-data\") pod \"dabb55da-08db-4d2a-8b2d-ac7b2b657053\" (UID: \"dabb55da-08db-4d2a-8b2d-ac7b2b657053\") " Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.141159 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dabb55da-08db-4d2a-8b2d-ac7b2b657053-fernet-keys\") pod \"dabb55da-08db-4d2a-8b2d-ac7b2b657053\" (UID: 
\"dabb55da-08db-4d2a-8b2d-ac7b2b657053\") " Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.141188 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dabb55da-08db-4d2a-8b2d-ac7b2b657053-scripts\") pod \"dabb55da-08db-4d2a-8b2d-ac7b2b657053\" (UID: \"dabb55da-08db-4d2a-8b2d-ac7b2b657053\") " Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.141242 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-46qfq\" (UniqueName: \"kubernetes.io/projected/dabb55da-08db-4d2a-8b2d-ac7b2b657053-kube-api-access-46qfq\") pod \"dabb55da-08db-4d2a-8b2d-ac7b2b657053\" (UID: \"dabb55da-08db-4d2a-8b2d-ac7b2b657053\") " Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.141323 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dabb55da-08db-4d2a-8b2d-ac7b2b657053-credential-keys\") pod \"dabb55da-08db-4d2a-8b2d-ac7b2b657053\" (UID: \"dabb55da-08db-4d2a-8b2d-ac7b2b657053\") " Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.147185 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dabb55da-08db-4d2a-8b2d-ac7b2b657053-fernet-keys" (OuterVolumeSpecName: "fernet-keys") pod "dabb55da-08db-4d2a-8b2d-ac7b2b657053" (UID: "dabb55da-08db-4d2a-8b2d-ac7b2b657053"). InnerVolumeSpecName "fernet-keys". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.147271 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dabb55da-08db-4d2a-8b2d-ac7b2b657053-credential-keys" (OuterVolumeSpecName: "credential-keys") pod "dabb55da-08db-4d2a-8b2d-ac7b2b657053" (UID: "dabb55da-08db-4d2a-8b2d-ac7b2b657053"). InnerVolumeSpecName "credential-keys". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.147452 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dabb55da-08db-4d2a-8b2d-ac7b2b657053-scripts" (OuterVolumeSpecName: "scripts") pod "dabb55da-08db-4d2a-8b2d-ac7b2b657053" (UID: "dabb55da-08db-4d2a-8b2d-ac7b2b657053"). InnerVolumeSpecName "scripts". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.153180 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dabb55da-08db-4d2a-8b2d-ac7b2b657053-kube-api-access-46qfq" (OuterVolumeSpecName: "kube-api-access-46qfq") pod "dabb55da-08db-4d2a-8b2d-ac7b2b657053" (UID: "dabb55da-08db-4d2a-8b2d-ac7b2b657053"). InnerVolumeSpecName "kube-api-access-46qfq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.165893 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dabb55da-08db-4d2a-8b2d-ac7b2b657053-config-data" (OuterVolumeSpecName: "config-data") pod "dabb55da-08db-4d2a-8b2d-ac7b2b657053" (UID: "dabb55da-08db-4d2a-8b2d-ac7b2b657053"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.211461 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/memcached-0"] Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.211868 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/memcached-0" podUID="9dfaa3fc-8bf7-420f-8581-4e917bf3f41c" containerName="memcached" containerID="cri-o://7d9c0759f36bb098c88e33085270280041e2db4b3aa27d3f10dea45195deff2f" gracePeriod=30 Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.243660 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/secret/dabb55da-08db-4d2a-8b2d-ac7b2b657053-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.243697 4751 reconciler_common.go:293] "Volume detached for volume \"fernet-keys\" (UniqueName: \"kubernetes.io/secret/dabb55da-08db-4d2a-8b2d-ac7b2b657053-fernet-keys\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.243706 4751 reconciler_common.go:293] "Volume detached for volume \"scripts\" (UniqueName: \"kubernetes.io/secret/dabb55da-08db-4d2a-8b2d-ac7b2b657053-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.243715 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-46qfq\" (UniqueName: \"kubernetes.io/projected/dabb55da-08db-4d2a-8b2d-ac7b2b657053-kube-api-access-46qfq\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.243726 4751 reconciler_common.go:293] "Volume detached for volume \"credential-keys\" (UniqueName: \"kubernetes.io/secret/dabb55da-08db-4d2a-8b2d-ac7b2b657053-credential-keys\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.345497 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-sgblm\" (UniqueName: \"kubernetes.io/projected/f6b12715-fb69-4237-ac73-a59a6972d988-kube-api-access-sgblm\") pod \"root-account-create-update-6tvsv\" (UID: \"f6b12715-fb69-4237-ac73-a59a6972d988\") " pod="glance-kuttl-tests/root-account-create-update-6tvsv" Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.345915 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6b12715-fb69-4237-ac73-a59a6972d988-operator-scripts\") pod \"root-account-create-update-6tvsv\" (UID: \"f6b12715-fb69-4237-ac73-a59a6972d988\") " pod="glance-kuttl-tests/root-account-create-update-6tvsv" Jan 31 15:06:26 crc kubenswrapper[4751]: E0131 15:06:26.346108 4751 configmap.go:193] Couldn't get configMap glance-kuttl-tests/openstack-scripts: configmap "openstack-scripts" not found Jan 31 15:06:26 crc kubenswrapper[4751]: E0131 15:06:26.346185 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/f6b12715-fb69-4237-ac73-a59a6972d988-operator-scripts podName:f6b12715-fb69-4237-ac73-a59a6972d988 nodeName:}" failed. No retries permitted until 2026-01-31 15:06:27.346167458 +0000 UTC m=+1491.720880343 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "operator-scripts" (UniqueName: "kubernetes.io/configmap/f6b12715-fb69-4237-ac73-a59a6972d988-operator-scripts") pod "root-account-create-update-6tvsv" (UID: "f6b12715-fb69-4237-ac73-a59a6972d988") : configmap "openstack-scripts" not found Jan 31 15:06:26 crc kubenswrapper[4751]: E0131 15:06:26.348440 4751 projected.go:194] Error preparing data for projected volume kube-api-access-sgblm for pod glance-kuttl-tests/root-account-create-update-6tvsv: failed to fetch token: serviceaccounts "galera-openstack" not found Jan 31 15:06:26 crc kubenswrapper[4751]: E0131 15:06:26.348560 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f6b12715-fb69-4237-ac73-a59a6972d988-kube-api-access-sgblm podName:f6b12715-fb69-4237-ac73-a59a6972d988 nodeName:}" failed. No retries permitted until 2026-01-31 15:06:27.348533021 +0000 UTC m=+1491.723245926 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-sgblm" (UniqueName: "kubernetes.io/projected/f6b12715-fb69-4237-ac73-a59a6972d988-kube-api-access-sgblm") pod "root-account-create-update-6tvsv" (UID: "f6b12715-fb69-4237-ac73-a59a6972d988") : failed to fetch token: serviceaccounts "galera-openstack" not found Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.463378 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/keystone-859d455469-zqqzw"] Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.469313 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/keystone-859d455469-zqqzw"] Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.537120 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/keystone-db-create-pl5bs"] Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.542453 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/keystone-db-create-pl5bs"] Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 
15:06:26.588972 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/keystonedfde-account-delete-rzcsj"] Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.593553 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/keystone-dfde-account-create-update-vbcnd"] Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.598266 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/keystone-dfde-account-create-update-vbcnd"] Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.633723 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["glance-kuttl-tests/rabbitmq-server-0"] Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.730396 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstack-galera-2" Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.853870 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3fcd9bac-c0cb-4de4-b630-0db07f110da7-config-data-generated\") pod \"3fcd9bac-c0cb-4de4-b630-0db07f110da7\" (UID: \"3fcd9bac-c0cb-4de4-b630-0db07f110da7\") " Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.854230 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") pod \"3fcd9bac-c0cb-4de4-b630-0db07f110da7\" (UID: \"3fcd9bac-c0cb-4de4-b630-0db07f110da7\") " Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.854242 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3fcd9bac-c0cb-4de4-b630-0db07f110da7-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "3fcd9bac-c0cb-4de4-b630-0db07f110da7" (UID: "3fcd9bac-c0cb-4de4-b630-0db07f110da7"). InnerVolumeSpecName "config-data-generated". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.854272 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3fcd9bac-c0cb-4de4-b630-0db07f110da7-kolla-config\") pod \"3fcd9bac-c0cb-4de4-b630-0db07f110da7\" (UID: \"3fcd9bac-c0cb-4de4-b630-0db07f110da7\") " Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.854387 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3fcd9bac-c0cb-4de4-b630-0db07f110da7-config-data-default\") pod \"3fcd9bac-c0cb-4de4-b630-0db07f110da7\" (UID: \"3fcd9bac-c0cb-4de4-b630-0db07f110da7\") " Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.854434 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fcd9bac-c0cb-4de4-b630-0db07f110da7-operator-scripts\") pod \"3fcd9bac-c0cb-4de4-b630-0db07f110da7\" (UID: \"3fcd9bac-c0cb-4de4-b630-0db07f110da7\") " Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.854484 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rwsz6\" (UniqueName: \"kubernetes.io/projected/3fcd9bac-c0cb-4de4-b630-0db07f110da7-kube-api-access-rwsz6\") pod \"3fcd9bac-c0cb-4de4-b630-0db07f110da7\" (UID: \"3fcd9bac-c0cb-4de4-b630-0db07f110da7\") " Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.854855 4751 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/3fcd9bac-c0cb-4de4-b630-0db07f110da7-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.855286 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/3fcd9bac-c0cb-4de4-b630-0db07f110da7-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "3fcd9bac-c0cb-4de4-b630-0db07f110da7" (UID: "3fcd9bac-c0cb-4de4-b630-0db07f110da7"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.855334 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fcd9bac-c0cb-4de4-b630-0db07f110da7-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "3fcd9bac-c0cb-4de4-b630-0db07f110da7" (UID: "3fcd9bac-c0cb-4de4-b630-0db07f110da7"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.855346 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3fcd9bac-c0cb-4de4-b630-0db07f110da7-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "3fcd9bac-c0cb-4de4-b630-0db07f110da7" (UID: "3fcd9bac-c0cb-4de4-b630-0db07f110da7"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.858303 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3fcd9bac-c0cb-4de4-b630-0db07f110da7-kube-api-access-rwsz6" (OuterVolumeSpecName: "kube-api-access-rwsz6") pod "3fcd9bac-c0cb-4de4-b630-0db07f110da7" (UID: "3fcd9bac-c0cb-4de4-b630-0db07f110da7"). InnerVolumeSpecName "kube-api-access-rwsz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.862478 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage12-crc" (OuterVolumeSpecName: "mysql-db") pod "3fcd9bac-c0cb-4de4-b630-0db07f110da7" (UID: "3fcd9bac-c0cb-4de4-b630-0db07f110da7"). 
InnerVolumeSpecName "local-storage12-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.956619 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage12-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" " Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.956677 4751 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/3fcd9bac-c0cb-4de4-b630-0db07f110da7-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.956700 4751 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/3fcd9bac-c0cb-4de4-b630-0db07f110da7-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.956718 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/3fcd9bac-c0cb-4de4-b630-0db07f110da7-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.956734 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rwsz6\" (UniqueName: \"kubernetes.io/projected/3fcd9bac-c0cb-4de4-b630-0db07f110da7-kube-api-access-rwsz6\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:26 crc kubenswrapper[4751]: I0131 15:06:26.972626 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage12-crc" (UniqueName: "kubernetes.io/local-volume/local-storage12-crc") on node "crc" Jan 31 15:06:27 crc kubenswrapper[4751]: I0131 15:06:27.051451 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/rabbitmq-server-0"] Jan 31 15:06:27 crc kubenswrapper[4751]: I0131 15:06:27.058636 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage12-crc\" 
(UniqueName: \"kubernetes.io/local-volume/local-storage12-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:27 crc kubenswrapper[4751]: I0131 15:06:27.124828 4751 generic.go:334] "Generic (PLEG): container finished" podID="3fcd9bac-c0cb-4de4-b630-0db07f110da7" containerID="eeb0727f6d7a3d1d251766b50edc1058bc460aa581ba0d5f746de288b9b3f16b" exitCode=0 Jan 31 15:06:27 crc kubenswrapper[4751]: I0131 15:06:27.124887 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstack-galera-2" Jan 31 15:06:27 crc kubenswrapper[4751]: I0131 15:06:27.124909 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-2" event={"ID":"3fcd9bac-c0cb-4de4-b630-0db07f110da7","Type":"ContainerDied","Data":"eeb0727f6d7a3d1d251766b50edc1058bc460aa581ba0d5f746de288b9b3f16b"} Jan 31 15:06:27 crc kubenswrapper[4751]: I0131 15:06:27.124940 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-2" event={"ID":"3fcd9bac-c0cb-4de4-b630-0db07f110da7","Type":"ContainerDied","Data":"4483e874a8f4e15e4dfcdca687206a7af35257a8c5ba1cb56d33195e769924f9"} Jan 31 15:06:27 crc kubenswrapper[4751]: I0131 15:06:27.124955 4751 scope.go:117] "RemoveContainer" containerID="eeb0727f6d7a3d1d251766b50edc1058bc460aa581ba0d5f746de288b9b3f16b" Jan 31 15:06:27 crc kubenswrapper[4751]: I0131 15:06:27.125117 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/root-account-create-update-6tvsv" Jan 31 15:06:27 crc kubenswrapper[4751]: I0131 15:06:27.125915 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-ztzpn" podUID="d783dd01-73a7-4362-888a-ab84bc8739df" containerName="registry-server" containerID="cri-o://8a8f6ec3fc4799718a2c776fd8b2c60694522c37afe696834e35482b1037e761" gracePeriod=2 Jan 31 15:06:27 crc kubenswrapper[4751]: I0131 15:06:27.152910 4751 scope.go:117] "RemoveContainer" containerID="b0d3ea91f474d5f0241c4f1e0b20927cdf5d85e229fe91747902d0e90daf242d" Jan 31 15:06:27 crc kubenswrapper[4751]: I0131 15:06:27.171956 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/rabbitmq-server-0" podUID="19317a08-b18b-42c9-bdc9-394e1e06257d" containerName="rabbitmq" containerID="cri-o://07f687eb09cbc17ef2ede020cb3e1c35352131bf2222486a5b70524349e266f9" gracePeriod=604800 Jan 31 15:06:27 crc kubenswrapper[4751]: I0131 15:06:27.182504 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/root-account-create-update-6tvsv"] Jan 31 15:06:27 crc kubenswrapper[4751]: I0131 15:06:27.196248 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/root-account-create-update-6tvsv"] Jan 31 15:06:27 crc kubenswrapper[4751]: I0131 15:06:27.217243 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/openstack-galera-2"] Jan 31 15:06:27 crc kubenswrapper[4751]: I0131 15:06:27.227368 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/openstack-galera-2"] Jan 31 15:06:27 crc kubenswrapper[4751]: I0131 15:06:27.232742 4751 scope.go:117] "RemoveContainer" containerID="eeb0727f6d7a3d1d251766b50edc1058bc460aa581ba0d5f746de288b9b3f16b" Jan 31 15:06:27 crc kubenswrapper[4751]: E0131 15:06:27.233065 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = 
NotFound desc = could not find container \"eeb0727f6d7a3d1d251766b50edc1058bc460aa581ba0d5f746de288b9b3f16b\": container with ID starting with eeb0727f6d7a3d1d251766b50edc1058bc460aa581ba0d5f746de288b9b3f16b not found: ID does not exist" containerID="eeb0727f6d7a3d1d251766b50edc1058bc460aa581ba0d5f746de288b9b3f16b" Jan 31 15:06:27 crc kubenswrapper[4751]: I0131 15:06:27.233122 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eeb0727f6d7a3d1d251766b50edc1058bc460aa581ba0d5f746de288b9b3f16b"} err="failed to get container status \"eeb0727f6d7a3d1d251766b50edc1058bc460aa581ba0d5f746de288b9b3f16b\": rpc error: code = NotFound desc = could not find container \"eeb0727f6d7a3d1d251766b50edc1058bc460aa581ba0d5f746de288b9b3f16b\": container with ID starting with eeb0727f6d7a3d1d251766b50edc1058bc460aa581ba0d5f746de288b9b3f16b not found: ID does not exist" Jan 31 15:06:27 crc kubenswrapper[4751]: I0131 15:06:27.233149 4751 scope.go:117] "RemoveContainer" containerID="b0d3ea91f474d5f0241c4f1e0b20927cdf5d85e229fe91747902d0e90daf242d" Jan 31 15:06:27 crc kubenswrapper[4751]: E0131 15:06:27.233559 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b0d3ea91f474d5f0241c4f1e0b20927cdf5d85e229fe91747902d0e90daf242d\": container with ID starting with b0d3ea91f474d5f0241c4f1e0b20927cdf5d85e229fe91747902d0e90daf242d not found: ID does not exist" containerID="b0d3ea91f474d5f0241c4f1e0b20927cdf5d85e229fe91747902d0e90daf242d" Jan 31 15:06:27 crc kubenswrapper[4751]: I0131 15:06:27.233598 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b0d3ea91f474d5f0241c4f1e0b20927cdf5d85e229fe91747902d0e90daf242d"} err="failed to get container status \"b0d3ea91f474d5f0241c4f1e0b20927cdf5d85e229fe91747902d0e90daf242d\": rpc error: code = NotFound desc = could not find container 
\"b0d3ea91f474d5f0241c4f1e0b20927cdf5d85e229fe91747902d0e90daf242d\": container with ID starting with b0d3ea91f474d5f0241c4f1e0b20927cdf5d85e229fe91747902d0e90daf242d not found: ID does not exist" Jan 31 15:06:27 crc kubenswrapper[4751]: I0131 15:06:27.363362 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/f6b12715-fb69-4237-ac73-a59a6972d988-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:27 crc kubenswrapper[4751]: I0131 15:06:27.363401 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgblm\" (UniqueName: \"kubernetes.io/projected/f6b12715-fb69-4237-ac73-a59a6972d988-kube-api-access-sgblm\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:27 crc kubenswrapper[4751]: I0131 15:06:27.445399 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystonedfde-account-delete-rzcsj" Jan 31 15:06:27 crc kubenswrapper[4751]: I0131 15:06:27.566821 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lk8gf\" (UniqueName: \"kubernetes.io/projected/949efaf4-a5db-405d-9d40-c44d525c603c-kube-api-access-lk8gf\") pod \"949efaf4-a5db-405d-9d40-c44d525c603c\" (UID: \"949efaf4-a5db-405d-9d40-c44d525c603c\") " Jan 31 15:06:27 crc kubenswrapper[4751]: I0131 15:06:27.566979 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/949efaf4-a5db-405d-9d40-c44d525c603c-operator-scripts\") pod \"949efaf4-a5db-405d-9d40-c44d525c603c\" (UID: \"949efaf4-a5db-405d-9d40-c44d525c603c\") " Jan 31 15:06:27 crc kubenswrapper[4751]: I0131 15:06:27.567549 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/949efaf4-a5db-405d-9d40-c44d525c603c-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "949efaf4-a5db-405d-9d40-c44d525c603c" (UID: 
"949efaf4-a5db-405d-9d40-c44d525c603c"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:06:27 crc kubenswrapper[4751]: I0131 15:06:27.576243 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/949efaf4-a5db-405d-9d40-c44d525c603c-kube-api-access-lk8gf" (OuterVolumeSpecName: "kube-api-access-lk8gf") pod "949efaf4-a5db-405d-9d40-c44d525c603c" (UID: "949efaf4-a5db-405d-9d40-c44d525c603c"). InnerVolumeSpecName "kube-api-access-lk8gf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:06:27 crc kubenswrapper[4751]: I0131 15:06:27.669226 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/949efaf4-a5db-405d-9d40-c44d525c603c-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:27 crc kubenswrapper[4751]: I0131 15:06:27.669273 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lk8gf\" (UniqueName: \"kubernetes.io/projected/949efaf4-a5db-405d-9d40-c44d525c603c-kube-api-access-lk8gf\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:27 crc kubenswrapper[4751]: I0131 15:06:27.841614 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/glance-operator-controller-manager-75dc47fc9-v4thz"] Jan 31 15:06:27 crc kubenswrapper[4751]: I0131 15:06:27.841851 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/glance-operator-controller-manager-75dc47fc9-v4thz" podUID="f70443db-a342-4f5d-81b2-39c01f494cf8" containerName="manager" containerID="cri-o://ab946ef56298d90f2da08c7aa03dc9761afb66c0a527a34685eef2375ecebd56" gracePeriod=10 Jan 31 15:06:27 crc kubenswrapper[4751]: I0131 15:06:27.856417 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/openstack-galera-1" podUID="22459bcc-672e-4390-89ae-2b5fa48ded71" containerName="galera" 
containerID="cri-o://6234bbbcfac3eddf715e4285a6b3d7b6a0aff6d850ad2df858a2deee34d9f571" gracePeriod=28 Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.112610 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/glance-operator-index-bvvpv"] Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.113166 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/glance-operator-index-bvvpv" podUID="eacc0c6c-95c4-487f-945e-4a1e3e17c508" containerName="registry-server" containerID="cri-o://432c19d38a03c6f3813c47fccb7f600a3290b65e75e89319b377dead29257091" gracePeriod=30 Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.147865 4751 generic.go:334] "Generic (PLEG): container finished" podID="d783dd01-73a7-4362-888a-ab84bc8739df" containerID="8a8f6ec3fc4799718a2c776fd8b2c60694522c37afe696834e35482b1037e761" exitCode=0 Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.147936 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ztzpn" event={"ID":"d783dd01-73a7-4362-888a-ab84bc8739df","Type":"ContainerDied","Data":"8a8f6ec3fc4799718a2c776fd8b2c60694522c37afe696834e35482b1037e761"} Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.152661 4751 generic.go:334] "Generic (PLEG): container finished" podID="f70443db-a342-4f5d-81b2-39c01f494cf8" containerID="ab946ef56298d90f2da08c7aa03dc9761afb66c0a527a34685eef2375ecebd56" exitCode=0 Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.152751 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-75dc47fc9-v4thz" event={"ID":"f70443db-a342-4f5d-81b2-39c01f494cf8","Type":"ContainerDied","Data":"ab946ef56298d90f2da08c7aa03dc9761afb66c0a527a34685eef2375ecebd56"} Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.159954 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack-operators/9a978acee48f3b64f3e45376f243bba270f67c38c120bc2488cd2a78caspwrp"] Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.160946 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/keystonedfde-account-delete-rzcsj" Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.161115 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/keystonedfde-account-delete-rzcsj" event={"ID":"949efaf4-a5db-405d-9d40-c44d525c603c","Type":"ContainerDied","Data":"5c96c33c1fd64acca4de260b260d0cd9dfb53370d0a6e41639488a98be0757ce"} Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.161153 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5c96c33c1fd64acca4de260b260d0cd9dfb53370d0a6e41639488a98be0757ce" Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.162737 4751 generic.go:334] "Generic (PLEG): container finished" podID="9dfaa3fc-8bf7-420f-8581-4e917bf3f41c" containerID="7d9c0759f36bb098c88e33085270280041e2db4b3aa27d3f10dea45195deff2f" exitCode=0 Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.162772 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/memcached-0" event={"ID":"9dfaa3fc-8bf7-420f-8581-4e917bf3f41c","Type":"ContainerDied","Data":"7d9c0759f36bb098c88e33085270280041e2db4b3aa27d3f10dea45195deff2f"} Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.171233 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/9a978acee48f3b64f3e45376f243bba270f67c38c120bc2488cd2a78caspwrp"] Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.293660 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/keystonedfde-account-delete-rzcsj"] Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.302919 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/keystonedfde-account-delete-rzcsj"] Jan 31 15:06:28 crc 
kubenswrapper[4751]: I0131 15:06:28.428857 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="06a47516-5cf6-431b-86ee-7732bd88fed4" path="/var/lib/kubelet/pods/06a47516-5cf6-431b-86ee-7732bd88fed4/volumes" Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.429637 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/memcached-0" Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.429649 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3fcd9bac-c0cb-4de4-b630-0db07f110da7" path="/var/lib/kubelet/pods/3fcd9bac-c0cb-4de4-b630-0db07f110da7/volumes" Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.430233 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="568d26c9-1fe8-4e01-a7c0-cbe91951fe60" path="/var/lib/kubelet/pods/568d26c9-1fe8-4e01-a7c0-cbe91951fe60/volumes" Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.431351 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="585f0c4b-3594-4683-bb38-d1fcbbee12cd" path="/var/lib/kubelet/pods/585f0c4b-3594-4683-bb38-d1fcbbee12cd/volumes" Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.431937 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="949efaf4-a5db-405d-9d40-c44d525c603c" path="/var/lib/kubelet/pods/949efaf4-a5db-405d-9d40-c44d525c603c/volumes" Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.432402 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dabb55da-08db-4d2a-8b2d-ac7b2b657053" path="/var/lib/kubelet/pods/dabb55da-08db-4d2a-8b2d-ac7b2b657053/volumes" Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.433204 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6b12715-fb69-4237-ac73-a59a6972d988" path="/var/lib/kubelet/pods/f6b12715-fb69-4237-ac73-a59a6972d988/volumes" Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.578952 4751 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9dfaa3fc-8bf7-420f-8581-4e917bf3f41c-kolla-config\") pod \"9dfaa3fc-8bf7-420f-8581-4e917bf3f41c\" (UID: \"9dfaa3fc-8bf7-420f-8581-4e917bf3f41c\") " Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.579062 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gbtbr\" (UniqueName: \"kubernetes.io/projected/9dfaa3fc-8bf7-420f-8581-4e917bf3f41c-kube-api-access-gbtbr\") pod \"9dfaa3fc-8bf7-420f-8581-4e917bf3f41c\" (UID: \"9dfaa3fc-8bf7-420f-8581-4e917bf3f41c\") " Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.579200 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9dfaa3fc-8bf7-420f-8581-4e917bf3f41c-config-data\") pod \"9dfaa3fc-8bf7-420f-8581-4e917bf3f41c\" (UID: \"9dfaa3fc-8bf7-420f-8581-4e917bf3f41c\") " Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.580907 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dfaa3fc-8bf7-420f-8581-4e917bf3f41c-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "9dfaa3fc-8bf7-420f-8581-4e917bf3f41c" (UID: "9dfaa3fc-8bf7-420f-8581-4e917bf3f41c"). InnerVolumeSpecName "kolla-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.583221 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9dfaa3fc-8bf7-420f-8581-4e917bf3f41c-config-data" (OuterVolumeSpecName: "config-data") pod "9dfaa3fc-8bf7-420f-8581-4e917bf3f41c" (UID: "9dfaa3fc-8bf7-420f-8581-4e917bf3f41c"). InnerVolumeSpecName "config-data". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.598330 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9dfaa3fc-8bf7-420f-8581-4e917bf3f41c-kube-api-access-gbtbr" (OuterVolumeSpecName: "kube-api-access-gbtbr") pod "9dfaa3fc-8bf7-420f-8581-4e917bf3f41c" (UID: "9dfaa3fc-8bf7-420f-8581-4e917bf3f41c"). InnerVolumeSpecName "kube-api-access-gbtbr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.681051 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gbtbr\" (UniqueName: \"kubernetes.io/projected/9dfaa3fc-8bf7-420f-8581-4e917bf3f41c-kube-api-access-gbtbr\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.681101 4751 reconciler_common.go:293] "Volume detached for volume \"config-data\" (UniqueName: \"kubernetes.io/configmap/9dfaa3fc-8bf7-420f-8581-4e917bf3f41c-config-data\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.681111 4751 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/9dfaa3fc-8bf7-420f-8581-4e917bf3f41c-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.725269 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ztzpn" Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.739487 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-index-bvvpv" Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.806524 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-75dc47fc9-v4thz" Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.883407 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d783dd01-73a7-4362-888a-ab84bc8739df-catalog-content\") pod \"d783dd01-73a7-4362-888a-ab84bc8739df\" (UID: \"d783dd01-73a7-4362-888a-ab84bc8739df\") " Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.883462 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f70443db-a342-4f5d-81b2-39c01f494cf8-apiservice-cert\") pod \"f70443db-a342-4f5d-81b2-39c01f494cf8\" (UID: \"f70443db-a342-4f5d-81b2-39c01f494cf8\") " Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.883543 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f70443db-a342-4f5d-81b2-39c01f494cf8-webhook-cert\") pod \"f70443db-a342-4f5d-81b2-39c01f494cf8\" (UID: \"f70443db-a342-4f5d-81b2-39c01f494cf8\") " Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.883568 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sgqfs\" (UniqueName: \"kubernetes.io/projected/d783dd01-73a7-4362-888a-ab84bc8739df-kube-api-access-sgqfs\") pod \"d783dd01-73a7-4362-888a-ab84bc8739df\" (UID: \"d783dd01-73a7-4362-888a-ab84bc8739df\") " Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.883614 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l6f7g\" (UniqueName: \"kubernetes.io/projected/f70443db-a342-4f5d-81b2-39c01f494cf8-kube-api-access-l6f7g\") pod \"f70443db-a342-4f5d-81b2-39c01f494cf8\" (UID: \"f70443db-a342-4f5d-81b2-39c01f494cf8\") " Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.883638 4751 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d783dd01-73a7-4362-888a-ab84bc8739df-utilities\") pod \"d783dd01-73a7-4362-888a-ab84bc8739df\" (UID: \"d783dd01-73a7-4362-888a-ab84bc8739df\") " Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.883659 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtvgv\" (UniqueName: \"kubernetes.io/projected/eacc0c6c-95c4-487f-945e-4a1e3e17c508-kube-api-access-vtvgv\") pod \"eacc0c6c-95c4-487f-945e-4a1e3e17c508\" (UID: \"eacc0c6c-95c4-487f-945e-4a1e3e17c508\") " Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.884564 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d783dd01-73a7-4362-888a-ab84bc8739df-utilities" (OuterVolumeSpecName: "utilities") pod "d783dd01-73a7-4362-888a-ab84bc8739df" (UID: "d783dd01-73a7-4362-888a-ab84bc8739df"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.888063 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f70443db-a342-4f5d-81b2-39c01f494cf8-kube-api-access-l6f7g" (OuterVolumeSpecName: "kube-api-access-l6f7g") pod "f70443db-a342-4f5d-81b2-39c01f494cf8" (UID: "f70443db-a342-4f5d-81b2-39c01f494cf8"). InnerVolumeSpecName "kube-api-access-l6f7g". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.888313 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d783dd01-73a7-4362-888a-ab84bc8739df-kube-api-access-sgqfs" (OuterVolumeSpecName: "kube-api-access-sgqfs") pod "d783dd01-73a7-4362-888a-ab84bc8739df" (UID: "d783dd01-73a7-4362-888a-ab84bc8739df"). InnerVolumeSpecName "kube-api-access-sgqfs". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.888820 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f70443db-a342-4f5d-81b2-39c01f494cf8-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "f70443db-a342-4f5d-81b2-39c01f494cf8" (UID: "f70443db-a342-4f5d-81b2-39c01f494cf8"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.889895 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eacc0c6c-95c4-487f-945e-4a1e3e17c508-kube-api-access-vtvgv" (OuterVolumeSpecName: "kube-api-access-vtvgv") pod "eacc0c6c-95c4-487f-945e-4a1e3e17c508" (UID: "eacc0c6c-95c4-487f-945e-4a1e3e17c508"). InnerVolumeSpecName "kube-api-access-vtvgv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.890622 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f70443db-a342-4f5d-81b2-39c01f494cf8-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "f70443db-a342-4f5d-81b2-39c01f494cf8" (UID: "f70443db-a342-4f5d-81b2-39c01f494cf8"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.985321 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtvgv\" (UniqueName: \"kubernetes.io/projected/eacc0c6c-95c4-487f-945e-4a1e3e17c508-kube-api-access-vtvgv\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.985361 4751 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/f70443db-a342-4f5d-81b2-39c01f494cf8-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.985374 4751 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/f70443db-a342-4f5d-81b2-39c01f494cf8-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.985387 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sgqfs\" (UniqueName: \"kubernetes.io/projected/d783dd01-73a7-4362-888a-ab84bc8739df-kube-api-access-sgqfs\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.985399 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-l6f7g\" (UniqueName: \"kubernetes.io/projected/f70443db-a342-4f5d-81b2-39c01f494cf8-kube-api-access-l6f7g\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:28 crc kubenswrapper[4751]: I0131 15:06:28.985411 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d783dd01-73a7-4362-888a-ab84bc8739df-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:29 crc kubenswrapper[4751]: I0131 15:06:29.010861 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d783dd01-73a7-4362-888a-ab84bc8739df-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d783dd01-73a7-4362-888a-ab84bc8739df" (UID: 
"d783dd01-73a7-4362-888a-ab84bc8739df"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:06:29 crc kubenswrapper[4751]: I0131 15:06:29.086556 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d783dd01-73a7-4362-888a-ab84bc8739df-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:29 crc kubenswrapper[4751]: I0131 15:06:29.172549 4751 generic.go:334] "Generic (PLEG): container finished" podID="eacc0c6c-95c4-487f-945e-4a1e3e17c508" containerID="432c19d38a03c6f3813c47fccb7f600a3290b65e75e89319b377dead29257091" exitCode=0 Jan 31 15:06:29 crc kubenswrapper[4751]: I0131 15:06:29.172602 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-index-bvvpv" event={"ID":"eacc0c6c-95c4-487f-945e-4a1e3e17c508","Type":"ContainerDied","Data":"432c19d38a03c6f3813c47fccb7f600a3290b65e75e89319b377dead29257091"} Jan 31 15:06:29 crc kubenswrapper[4751]: I0131 15:06:29.172625 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-index-bvvpv" event={"ID":"eacc0c6c-95c4-487f-945e-4a1e3e17c508","Type":"ContainerDied","Data":"701664b77023940ba4b0968a1f7dc87bd2c93fe4b8f5f2f39b4e39a24e4b2f4b"} Jan 31 15:06:29 crc kubenswrapper[4751]: I0131 15:06:29.172644 4751 scope.go:117] "RemoveContainer" containerID="432c19d38a03c6f3813c47fccb7f600a3290b65e75e89319b377dead29257091" Jan 31 15:06:29 crc kubenswrapper[4751]: I0131 15:06:29.172714 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/glance-operator-index-bvvpv" Jan 31 15:06:29 crc kubenswrapper[4751]: I0131 15:06:29.183590 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/memcached-0" Jan 31 15:06:29 crc kubenswrapper[4751]: I0131 15:06:29.183665 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/memcached-0" event={"ID":"9dfaa3fc-8bf7-420f-8581-4e917bf3f41c","Type":"ContainerDied","Data":"cf904354b92714c266cf175421ba71e5ed9cb49d7ba4bbc0c72df9a09635ce8a"} Jan 31 15:06:29 crc kubenswrapper[4751]: I0131 15:06:29.194047 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-ztzpn" event={"ID":"d783dd01-73a7-4362-888a-ab84bc8739df","Type":"ContainerDied","Data":"f8e2ea7f77972f236bec476d7b7bb124f32cd9d091fcbabec970fc3dd4a6de6c"} Jan 31 15:06:29 crc kubenswrapper[4751]: I0131 15:06:29.194164 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-ztzpn" Jan 31 15:06:29 crc kubenswrapper[4751]: I0131 15:06:29.208516 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/glance-operator-controller-manager-75dc47fc9-v4thz" event={"ID":"f70443db-a342-4f5d-81b2-39c01f494cf8","Type":"ContainerDied","Data":"1eff2ed52d31a6cb86d6cac75fe9fb2899624e91687b3dbe55c93d71e4cef517"} Jan 31 15:06:29 crc kubenswrapper[4751]: I0131 15:06:29.208596 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/glance-operator-controller-manager-75dc47fc9-v4thz" Jan 31 15:06:29 crc kubenswrapper[4751]: I0131 15:06:29.212328 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/glance-operator-index-bvvpv"] Jan 31 15:06:29 crc kubenswrapper[4751]: I0131 15:06:29.215475 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/glance-operator-index-bvvpv"] Jan 31 15:06:29 crc kubenswrapper[4751]: I0131 15:06:29.244527 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-ztzpn"] Jan 31 15:06:29 crc kubenswrapper[4751]: I0131 15:06:29.250150 4751 scope.go:117] "RemoveContainer" containerID="432c19d38a03c6f3813c47fccb7f600a3290b65e75e89319b377dead29257091" Jan 31 15:06:29 crc kubenswrapper[4751]: E0131 15:06:29.250652 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"432c19d38a03c6f3813c47fccb7f600a3290b65e75e89319b377dead29257091\": container with ID starting with 432c19d38a03c6f3813c47fccb7f600a3290b65e75e89319b377dead29257091 not found: ID does not exist" containerID="432c19d38a03c6f3813c47fccb7f600a3290b65e75e89319b377dead29257091" Jan 31 15:06:29 crc kubenswrapper[4751]: I0131 15:06:29.250683 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"432c19d38a03c6f3813c47fccb7f600a3290b65e75e89319b377dead29257091"} err="failed to get container status \"432c19d38a03c6f3813c47fccb7f600a3290b65e75e89319b377dead29257091\": rpc error: code = NotFound desc = could not find container \"432c19d38a03c6f3813c47fccb7f600a3290b65e75e89319b377dead29257091\": container with ID starting with 432c19d38a03c6f3813c47fccb7f600a3290b65e75e89319b377dead29257091 not found: ID does not exist" Jan 31 15:06:29 crc kubenswrapper[4751]: I0131 15:06:29.250704 4751 scope.go:117] "RemoveContainer" 
containerID="7d9c0759f36bb098c88e33085270280041e2db4b3aa27d3f10dea45195deff2f" Jan 31 15:06:29 crc kubenswrapper[4751]: I0131 15:06:29.254103 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-ztzpn"] Jan 31 15:06:29 crc kubenswrapper[4751]: I0131 15:06:29.272650 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/memcached-0"] Jan 31 15:06:29 crc kubenswrapper[4751]: I0131 15:06:29.286193 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/memcached-0"] Jan 31 15:06:29 crc kubenswrapper[4751]: I0131 15:06:29.290865 4751 scope.go:117] "RemoveContainer" containerID="8a8f6ec3fc4799718a2c776fd8b2c60694522c37afe696834e35482b1037e761" Jan 31 15:06:29 crc kubenswrapper[4751]: I0131 15:06:29.293841 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/glance-operator-controller-manager-75dc47fc9-v4thz"] Jan 31 15:06:29 crc kubenswrapper[4751]: I0131 15:06:29.300703 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/glance-operator-controller-manager-75dc47fc9-v4thz"] Jan 31 15:06:29 crc kubenswrapper[4751]: I0131 15:06:29.340685 4751 scope.go:117] "RemoveContainer" containerID="c9f9f3a04268cfbac7a889faf5708fdd7ab535489380c76f269ae48567d562f0" Jan 31 15:06:29 crc kubenswrapper[4751]: I0131 15:06:29.360211 4751 scope.go:117] "RemoveContainer" containerID="568906c2cc7feff3ba674be852dca9f1ba04b313f69bf113705a16e3309aa4da" Jan 31 15:06:29 crc kubenswrapper[4751]: I0131 15:06:29.382018 4751 scope.go:117] "RemoveContainer" containerID="ab946ef56298d90f2da08c7aa03dc9761afb66c0a527a34685eef2375ecebd56" Jan 31 15:06:29 crc kubenswrapper[4751]: I0131 15:06:29.875925 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="glance-kuttl-tests/openstack-galera-0" podUID="07a2906d-db30-4578-8b1e-088ca2f20ced" containerName="galera" 
containerID="cri-o://0e2fc16f141e03061cef807a2713d8f66a6c5d9ed59205690727526ba6a882ea" gracePeriod=26 Jan 31 15:06:29 crc kubenswrapper[4751]: I0131 15:06:29.995521 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstack-galera-1" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.099586 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/22459bcc-672e-4390-89ae-2b5fa48ded71-config-data-generated\") pod \"22459bcc-672e-4390-89ae-2b5fa48ded71\" (UID: \"22459bcc-672e-4390-89ae-2b5fa48ded71\") " Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.099642 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/22459bcc-672e-4390-89ae-2b5fa48ded71-config-data-default\") pod \"22459bcc-672e-4390-89ae-2b5fa48ded71\" (UID: \"22459bcc-672e-4390-89ae-2b5fa48ded71\") " Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.099789 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") pod \"22459bcc-672e-4390-89ae-2b5fa48ded71\" (UID: \"22459bcc-672e-4390-89ae-2b5fa48ded71\") " Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.099917 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/22459bcc-672e-4390-89ae-2b5fa48ded71-kolla-config\") pod \"22459bcc-672e-4390-89ae-2b5fa48ded71\" (UID: \"22459bcc-672e-4390-89ae-2b5fa48ded71\") " Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.099986 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22459bcc-672e-4390-89ae-2b5fa48ded71-operator-scripts\") pod \"22459bcc-672e-4390-89ae-2b5fa48ded71\" 
(UID: \"22459bcc-672e-4390-89ae-2b5fa48ded71\") " Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.100044 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nj4k\" (UniqueName: \"kubernetes.io/projected/22459bcc-672e-4390-89ae-2b5fa48ded71-kube-api-access-5nj4k\") pod \"22459bcc-672e-4390-89ae-2b5fa48ded71\" (UID: \"22459bcc-672e-4390-89ae-2b5fa48ded71\") " Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.100263 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/22459bcc-672e-4390-89ae-2b5fa48ded71-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "22459bcc-672e-4390-89ae-2b5fa48ded71" (UID: "22459bcc-672e-4390-89ae-2b5fa48ded71"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.100480 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22459bcc-672e-4390-89ae-2b5fa48ded71-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "22459bcc-672e-4390-89ae-2b5fa48ded71" (UID: "22459bcc-672e-4390-89ae-2b5fa48ded71"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.100564 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22459bcc-672e-4390-89ae-2b5fa48ded71-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "22459bcc-672e-4390-89ae-2b5fa48ded71" (UID: "22459bcc-672e-4390-89ae-2b5fa48ded71"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.100674 4751 reconciler_common.go:293] "Volume detached for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/22459bcc-672e-4390-89ae-2b5fa48ded71-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.100690 4751 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/22459bcc-672e-4390-89ae-2b5fa48ded71-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.100724 4751 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/22459bcc-672e-4390-89ae-2b5fa48ded71-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.101234 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22459bcc-672e-4390-89ae-2b5fa48ded71-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "22459bcc-672e-4390-89ae-2b5fa48ded71" (UID: "22459bcc-672e-4390-89ae-2b5fa48ded71"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.107268 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22459bcc-672e-4390-89ae-2b5fa48ded71-kube-api-access-5nj4k" (OuterVolumeSpecName: "kube-api-access-5nj4k") pod "22459bcc-672e-4390-89ae-2b5fa48ded71" (UID: "22459bcc-672e-4390-89ae-2b5fa48ded71"). InnerVolumeSpecName "kube-api-access-5nj4k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.111766 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage06-crc" (OuterVolumeSpecName: "mysql-db") pod "22459bcc-672e-4390-89ae-2b5fa48ded71" (UID: "22459bcc-672e-4390-89ae-2b5fa48ded71"). InnerVolumeSpecName "local-storage06-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.170992 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.202444 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" " Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.203452 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/22459bcc-672e-4390-89ae-2b5fa48ded71-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.203544 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nj4k\" (UniqueName: \"kubernetes.io/projected/22459bcc-672e-4390-89ae-2b5fa48ded71-kube-api-access-5nj4k\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.215173 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage06-crc" (UniqueName: "kubernetes.io/local-volume/local-storage06-crc") on node "crc" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.231500 4751 generic.go:334] "Generic (PLEG): container finished" podID="22459bcc-672e-4390-89ae-2b5fa48ded71" containerID="6234bbbcfac3eddf715e4285a6b3d7b6a0aff6d850ad2df858a2deee34d9f571" exitCode=0 Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 
15:06:30.231569 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-1" event={"ID":"22459bcc-672e-4390-89ae-2b5fa48ded71","Type":"ContainerDied","Data":"6234bbbcfac3eddf715e4285a6b3d7b6a0aff6d850ad2df858a2deee34d9f571"} Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.231596 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-1" event={"ID":"22459bcc-672e-4390-89ae-2b5fa48ded71","Type":"ContainerDied","Data":"6b6faf7aa73840af2027f08065efac105f4b0ad43c2d2c60890bf024de99e2ca"} Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.231615 4751 scope.go:117] "RemoveContainer" containerID="6234bbbcfac3eddf715e4285a6b3d7b6a0aff6d850ad2df858a2deee34d9f571" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.231703 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstack-galera-1" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.239377 4751 generic.go:334] "Generic (PLEG): container finished" podID="19317a08-b18b-42c9-bdc9-394e1e06257d" containerID="07f687eb09cbc17ef2ede020cb3e1c35352131bf2222486a5b70524349e266f9" exitCode=0 Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.239445 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/rabbitmq-server-0" event={"ID":"19317a08-b18b-42c9-bdc9-394e1e06257d","Type":"ContainerDied","Data":"07f687eb09cbc17ef2ede020cb3e1c35352131bf2222486a5b70524349e266f9"} Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.239470 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/rabbitmq-server-0" event={"ID":"19317a08-b18b-42c9-bdc9-394e1e06257d","Type":"ContainerDied","Data":"f6c134f960dca8717c0eb288c9e0a54cef2dc5968f5f68b04744d850b9ec573e"} Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.239534 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/rabbitmq-server-0" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.270107 4751 scope.go:117] "RemoveContainer" containerID="0844a74085d7d943d717fb7babb3a7b7db796dff92dfd7c3894a2eccb22eb987" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.296184 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/openstack-galera-1"] Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.302157 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/openstack-galera-1"] Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.306433 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/19317a08-b18b-42c9-bdc9-394e1e06257d-plugins-conf\") pod \"19317a08-b18b-42c9-bdc9-394e1e06257d\" (UID: \"19317a08-b18b-42c9-bdc9-394e1e06257d\") " Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.306473 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/19317a08-b18b-42c9-bdc9-394e1e06257d-rabbitmq-erlang-cookie\") pod \"19317a08-b18b-42c9-bdc9-394e1e06257d\" (UID: \"19317a08-b18b-42c9-bdc9-394e1e06257d\") " Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.306529 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/19317a08-b18b-42c9-bdc9-394e1e06257d-rabbitmq-plugins\") pod \"19317a08-b18b-42c9-bdc9-394e1e06257d\" (UID: \"19317a08-b18b-42c9-bdc9-394e1e06257d\") " Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.306567 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/19317a08-b18b-42c9-bdc9-394e1e06257d-rabbitmq-confd\") pod \"19317a08-b18b-42c9-bdc9-394e1e06257d\" (UID: 
\"19317a08-b18b-42c9-bdc9-394e1e06257d\") " Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.306604 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/19317a08-b18b-42c9-bdc9-394e1e06257d-erlang-cookie-secret\") pod \"19317a08-b18b-42c9-bdc9-394e1e06257d\" (UID: \"19317a08-b18b-42c9-bdc9-394e1e06257d\") " Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.306742 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"persistence\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f145c232-830a-4841-bd1f-7c42e25cd443\") pod \"19317a08-b18b-42c9-bdc9-394e1e06257d\" (UID: \"19317a08-b18b-42c9-bdc9-394e1e06257d\") " Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.306787 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/19317a08-b18b-42c9-bdc9-394e1e06257d-pod-info\") pod \"19317a08-b18b-42c9-bdc9-394e1e06257d\" (UID: \"19317a08-b18b-42c9-bdc9-394e1e06257d\") " Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.306806 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zrbjl\" (UniqueName: \"kubernetes.io/projected/19317a08-b18b-42c9-bdc9-394e1e06257d-kube-api-access-zrbjl\") pod \"19317a08-b18b-42c9-bdc9-394e1e06257d\" (UID: \"19317a08-b18b-42c9-bdc9-394e1e06257d\") " Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.307027 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage06-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage06-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.307089 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/19317a08-b18b-42c9-bdc9-394e1e06257d-plugins-conf" (OuterVolumeSpecName: "plugins-conf") pod 
"19317a08-b18b-42c9-bdc9-394e1e06257d" (UID: "19317a08-b18b-42c9-bdc9-394e1e06257d"). InnerVolumeSpecName "plugins-conf". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.307131 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19317a08-b18b-42c9-bdc9-394e1e06257d-rabbitmq-plugins" (OuterVolumeSpecName: "rabbitmq-plugins") pod "19317a08-b18b-42c9-bdc9-394e1e06257d" (UID: "19317a08-b18b-42c9-bdc9-394e1e06257d"). InnerVolumeSpecName "rabbitmq-plugins". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.307977 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19317a08-b18b-42c9-bdc9-394e1e06257d-rabbitmq-erlang-cookie" (OuterVolumeSpecName: "rabbitmq-erlang-cookie") pod "19317a08-b18b-42c9-bdc9-394e1e06257d" (UID: "19317a08-b18b-42c9-bdc9-394e1e06257d"). InnerVolumeSpecName "rabbitmq-erlang-cookie". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.310416 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19317a08-b18b-42c9-bdc9-394e1e06257d-kube-api-access-zrbjl" (OuterVolumeSpecName: "kube-api-access-zrbjl") pod "19317a08-b18b-42c9-bdc9-394e1e06257d" (UID: "19317a08-b18b-42c9-bdc9-394e1e06257d"). InnerVolumeSpecName "kube-api-access-zrbjl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.315252 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/downward-api/19317a08-b18b-42c9-bdc9-394e1e06257d-pod-info" (OuterVolumeSpecName: "pod-info") pod "19317a08-b18b-42c9-bdc9-394e1e06257d" (UID: "19317a08-b18b-42c9-bdc9-394e1e06257d"). InnerVolumeSpecName "pod-info". 
PluginName "kubernetes.io/downward-api", VolumeGidValue "" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.318743 4751 scope.go:117] "RemoveContainer" containerID="6234bbbcfac3eddf715e4285a6b3d7b6a0aff6d850ad2df858a2deee34d9f571" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.319827 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/swift-operator-controller-manager-59595cd-9djr5"] Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.320018 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/swift-operator-controller-manager-59595cd-9djr5" podUID="9f3dfaad-d451-448b-a447-47fc7bbff0e5" containerName="manager" containerID="cri-o://668d137892e68f6f4b2298a804a817a3b16a09cc4c85201f8a03fca82e38e755" gracePeriod=10 Jan 31 15:06:30 crc kubenswrapper[4751]: E0131 15:06:30.321625 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6234bbbcfac3eddf715e4285a6b3d7b6a0aff6d850ad2df858a2deee34d9f571\": container with ID starting with 6234bbbcfac3eddf715e4285a6b3d7b6a0aff6d850ad2df858a2deee34d9f571 not found: ID does not exist" containerID="6234bbbcfac3eddf715e4285a6b3d7b6a0aff6d850ad2df858a2deee34d9f571" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.321678 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6234bbbcfac3eddf715e4285a6b3d7b6a0aff6d850ad2df858a2deee34d9f571"} err="failed to get container status \"6234bbbcfac3eddf715e4285a6b3d7b6a0aff6d850ad2df858a2deee34d9f571\": rpc error: code = NotFound desc = could not find container \"6234bbbcfac3eddf715e4285a6b3d7b6a0aff6d850ad2df858a2deee34d9f571\": container with ID starting with 6234bbbcfac3eddf715e4285a6b3d7b6a0aff6d850ad2df858a2deee34d9f571 not found: ID does not exist" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.321702 4751 scope.go:117] "RemoveContainer" 
containerID="0844a74085d7d943d717fb7babb3a7b7db796dff92dfd7c3894a2eccb22eb987" Jan 31 15:06:30 crc kubenswrapper[4751]: E0131 15:06:30.323584 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0844a74085d7d943d717fb7babb3a7b7db796dff92dfd7c3894a2eccb22eb987\": container with ID starting with 0844a74085d7d943d717fb7babb3a7b7db796dff92dfd7c3894a2eccb22eb987 not found: ID does not exist" containerID="0844a74085d7d943d717fb7babb3a7b7db796dff92dfd7c3894a2eccb22eb987" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.323615 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0844a74085d7d943d717fb7babb3a7b7db796dff92dfd7c3894a2eccb22eb987"} err="failed to get container status \"0844a74085d7d943d717fb7babb3a7b7db796dff92dfd7c3894a2eccb22eb987\": rpc error: code = NotFound desc = could not find container \"0844a74085d7d943d717fb7babb3a7b7db796dff92dfd7c3894a2eccb22eb987\": container with ID starting with 0844a74085d7d943d717fb7babb3a7b7db796dff92dfd7c3894a2eccb22eb987 not found: ID does not exist" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.323635 4751 scope.go:117] "RemoveContainer" containerID="07f687eb09cbc17ef2ede020cb3e1c35352131bf2222486a5b70524349e266f9" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.324799 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/19317a08-b18b-42c9-bdc9-394e1e06257d-erlang-cookie-secret" (OuterVolumeSpecName: "erlang-cookie-secret") pod "19317a08-b18b-42c9-bdc9-394e1e06257d" (UID: "19317a08-b18b-42c9-bdc9-394e1e06257d"). InnerVolumeSpecName "erlang-cookie-secret". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.332526 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f145c232-830a-4841-bd1f-7c42e25cd443" (OuterVolumeSpecName: "persistence") pod "19317a08-b18b-42c9-bdc9-394e1e06257d" (UID: "19317a08-b18b-42c9-bdc9-394e1e06257d"). InnerVolumeSpecName "pvc-f145c232-830a-4841-bd1f-7c42e25cd443". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.349028 4751 scope.go:117] "RemoveContainer" containerID="505748b1e10e777b66b173a6705d54ff333de5b60e6cd125a1cf81bd7167e586" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.379324 4751 scope.go:117] "RemoveContainer" containerID="07f687eb09cbc17ef2ede020cb3e1c35352131bf2222486a5b70524349e266f9" Jan 31 15:06:30 crc kubenswrapper[4751]: E0131 15:06:30.379613 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07f687eb09cbc17ef2ede020cb3e1c35352131bf2222486a5b70524349e266f9\": container with ID starting with 07f687eb09cbc17ef2ede020cb3e1c35352131bf2222486a5b70524349e266f9 not found: ID does not exist" containerID="07f687eb09cbc17ef2ede020cb3e1c35352131bf2222486a5b70524349e266f9" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.379638 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07f687eb09cbc17ef2ede020cb3e1c35352131bf2222486a5b70524349e266f9"} err="failed to get container status \"07f687eb09cbc17ef2ede020cb3e1c35352131bf2222486a5b70524349e266f9\": rpc error: code = NotFound desc = could not find container \"07f687eb09cbc17ef2ede020cb3e1c35352131bf2222486a5b70524349e266f9\": container with ID starting with 07f687eb09cbc17ef2ede020cb3e1c35352131bf2222486a5b70524349e266f9 not found: ID does not exist" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 
15:06:30.379656 4751 scope.go:117] "RemoveContainer" containerID="505748b1e10e777b66b173a6705d54ff333de5b60e6cd125a1cf81bd7167e586" Jan 31 15:06:30 crc kubenswrapper[4751]: E0131 15:06:30.380107 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"505748b1e10e777b66b173a6705d54ff333de5b60e6cd125a1cf81bd7167e586\": container with ID starting with 505748b1e10e777b66b173a6705d54ff333de5b60e6cd125a1cf81bd7167e586 not found: ID does not exist" containerID="505748b1e10e777b66b173a6705d54ff333de5b60e6cd125a1cf81bd7167e586" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.380143 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"505748b1e10e777b66b173a6705d54ff333de5b60e6cd125a1cf81bd7167e586"} err="failed to get container status \"505748b1e10e777b66b173a6705d54ff333de5b60e6cd125a1cf81bd7167e586\": rpc error: code = NotFound desc = could not find container \"505748b1e10e777b66b173a6705d54ff333de5b60e6cd125a1cf81bd7167e586\": container with ID starting with 505748b1e10e777b66b173a6705d54ff333de5b60e6cd125a1cf81bd7167e586 not found: ID does not exist" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.395534 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19317a08-b18b-42c9-bdc9-394e1e06257d-rabbitmq-confd" (OuterVolumeSpecName: "rabbitmq-confd") pod "19317a08-b18b-42c9-bdc9-394e1e06257d" (UID: "19317a08-b18b-42c9-bdc9-394e1e06257d"). InnerVolumeSpecName "rabbitmq-confd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.407911 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-confd\" (UniqueName: \"kubernetes.io/projected/19317a08-b18b-42c9-bdc9-394e1e06257d-rabbitmq-confd\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.407955 4751 reconciler_common.go:293] "Volume detached for volume \"erlang-cookie-secret\" (UniqueName: \"kubernetes.io/secret/19317a08-b18b-42c9-bdc9-394e1e06257d-erlang-cookie-secret\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.407997 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"pvc-f145c232-830a-4841-bd1f-7c42e25cd443\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f145c232-830a-4841-bd1f-7c42e25cd443\") on node \"crc\" " Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.408015 4751 reconciler_common.go:293] "Volume detached for volume \"pod-info\" (UniqueName: \"kubernetes.io/downward-api/19317a08-b18b-42c9-bdc9-394e1e06257d-pod-info\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.408028 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zrbjl\" (UniqueName: \"kubernetes.io/projected/19317a08-b18b-42c9-bdc9-394e1e06257d-kube-api-access-zrbjl\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.408039 4751 reconciler_common.go:293] "Volume detached for volume \"plugins-conf\" (UniqueName: \"kubernetes.io/configmap/19317a08-b18b-42c9-bdc9-394e1e06257d-plugins-conf\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.408050 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-erlang-cookie\" (UniqueName: \"kubernetes.io/empty-dir/19317a08-b18b-42c9-bdc9-394e1e06257d-rabbitmq-erlang-cookie\") on node \"crc\" 
DevicePath \"\"" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.408061 4751 reconciler_common.go:293] "Volume detached for volume \"rabbitmq-plugins\" (UniqueName: \"kubernetes.io/empty-dir/19317a08-b18b-42c9-bdc9-394e1e06257d-rabbitmq-plugins\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.423345 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22459bcc-672e-4390-89ae-2b5fa48ded71" path="/var/lib/kubelet/pods/22459bcc-672e-4390-89ae-2b5fa48ded71/volumes" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.425092 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9dfaa3fc-8bf7-420f-8581-4e917bf3f41c" path="/var/lib/kubelet/pods/9dfaa3fc-8bf7-420f-8581-4e917bf3f41c/volumes" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.426146 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d783dd01-73a7-4362-888a-ab84bc8739df" path="/var/lib/kubelet/pods/d783dd01-73a7-4362-888a-ab84bc8739df/volumes" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.427464 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eacc0c6c-95c4-487f-945e-4a1e3e17c508" path="/var/lib/kubelet/pods/eacc0c6c-95c4-487f-945e-4a1e3e17c508/volumes" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.428004 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f70443db-a342-4f5d-81b2-39c01f494cf8" path="/var/lib/kubelet/pods/f70443db-a342-4f5d-81b2-39c01f494cf8/volumes" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.429704 4751 csi_attacher.go:630] kubernetes.io/csi: attacher.UnmountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping UnmountDevice... 
Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.429855 4751 operation_generator.go:917] UnmountDevice succeeded for volume "pvc-f145c232-830a-4841-bd1f-7c42e25cd443" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f145c232-830a-4841-bd1f-7c42e25cd443") on node "crc" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.510545 4751 reconciler_common.go:293] "Volume detached for volume \"pvc-f145c232-830a-4841-bd1f-7c42e25cd443\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-f145c232-830a-4841-bd1f-7c42e25cd443\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.578208 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/swift-operator-index-75pvx"] Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.578491 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/swift-operator-index-75pvx" podUID="065b8624-7cdb-463c-9636-d3e980119eb7" containerName="registry-server" containerID="cri-o://d60e188fde30ce119895fe465702862991673f5195ee276a966d98efdbbb7cf3" gracePeriod=30 Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.587142 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/rabbitmq-server-0"] Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.595460 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/rabbitmq-server-0"] Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.607973 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4bbztq"] Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.612565 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/70e8c782c05b28200f5f2de3cb5cb1e7b36c65af2b76ab17506213a5b4bbztq"] Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.721899 4751 util.go:48] "No ready sandbox for pod 
can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-59595cd-9djr5" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.813407 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9f3dfaad-d451-448b-a447-47fc7bbff0e5-webhook-cert\") pod \"9f3dfaad-d451-448b-a447-47fc7bbff0e5\" (UID: \"9f3dfaad-d451-448b-a447-47fc7bbff0e5\") " Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.813494 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-67l2j\" (UniqueName: \"kubernetes.io/projected/9f3dfaad-d451-448b-a447-47fc7bbff0e5-kube-api-access-67l2j\") pod \"9f3dfaad-d451-448b-a447-47fc7bbff0e5\" (UID: \"9f3dfaad-d451-448b-a447-47fc7bbff0e5\") " Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.813563 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9f3dfaad-d451-448b-a447-47fc7bbff0e5-apiservice-cert\") pod \"9f3dfaad-d451-448b-a447-47fc7bbff0e5\" (UID: \"9f3dfaad-d451-448b-a447-47fc7bbff0e5\") " Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.816575 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f3dfaad-d451-448b-a447-47fc7bbff0e5-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "9f3dfaad-d451-448b-a447-47fc7bbff0e5" (UID: "9f3dfaad-d451-448b-a447-47fc7bbff0e5"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.816699 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f3dfaad-d451-448b-a447-47fc7bbff0e5-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "9f3dfaad-d451-448b-a447-47fc7bbff0e5" (UID: "9f3dfaad-d451-448b-a447-47fc7bbff0e5"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.816941 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f3dfaad-d451-448b-a447-47fc7bbff0e5-kube-api-access-67l2j" (OuterVolumeSpecName: "kube-api-access-67l2j") pod "9f3dfaad-d451-448b-a447-47fc7bbff0e5" (UID: "9f3dfaad-d451-448b-a447-47fc7bbff0e5"). InnerVolumeSpecName "kube-api-access-67l2j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.856202 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/openstack-galera-0" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.915613 4751 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/9f3dfaad-d451-448b-a447-47fc7bbff0e5-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.915648 4751 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/9f3dfaad-d451-448b-a447-47fc7bbff0e5-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.915656 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-67l2j\" (UniqueName: \"kubernetes.io/projected/9f3dfaad-d451-448b-a447-47fc7bbff0e5-kube-api-access-67l2j\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:30 crc kubenswrapper[4751]: I0131 15:06:30.976602 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-index-75pvx" Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.016981 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07a2906d-db30-4578-8b1e-088ca2f20ced-operator-scripts\") pod \"07a2906d-db30-4578-8b1e-088ca2f20ced\" (UID: \"07a2906d-db30-4578-8b1e-088ca2f20ced\") " Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.017096 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ng8cd\" (UniqueName: \"kubernetes.io/projected/07a2906d-db30-4578-8b1e-088ca2f20ced-kube-api-access-ng8cd\") pod \"07a2906d-db30-4578-8b1e-088ca2f20ced\" (UID: \"07a2906d-db30-4578-8b1e-088ca2f20ced\") " Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.017138 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/07a2906d-db30-4578-8b1e-088ca2f20ced-config-data-default\") pod \"07a2906d-db30-4578-8b1e-088ca2f20ced\" (UID: \"07a2906d-db30-4578-8b1e-088ca2f20ced\") " Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.017177 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kolla-config\" (UniqueName: \"kubernetes.io/configmap/07a2906d-db30-4578-8b1e-088ca2f20ced-kolla-config\") pod \"07a2906d-db30-4578-8b1e-088ca2f20ced\" (UID: \"07a2906d-db30-4578-8b1e-088ca2f20ced\") " Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.017242 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mysql-db\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") pod \"07a2906d-db30-4578-8b1e-088ca2f20ced\" (UID: \"07a2906d-db30-4578-8b1e-088ca2f20ced\") " Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.017279 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/07a2906d-db30-4578-8b1e-088ca2f20ced-config-data-generated\") pod \"07a2906d-db30-4578-8b1e-088ca2f20ced\" (UID: \"07a2906d-db30-4578-8b1e-088ca2f20ced\") " Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.018516 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07a2906d-db30-4578-8b1e-088ca2f20ced-config-data-default" (OuterVolumeSpecName: "config-data-default") pod "07a2906d-db30-4578-8b1e-088ca2f20ced" (UID: "07a2906d-db30-4578-8b1e-088ca2f20ced"). InnerVolumeSpecName "config-data-default". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.018575 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07a2906d-db30-4578-8b1e-088ca2f20ced-config-data-generated" (OuterVolumeSpecName: "config-data-generated") pod "07a2906d-db30-4578-8b1e-088ca2f20ced" (UID: "07a2906d-db30-4578-8b1e-088ca2f20ced"). InnerVolumeSpecName "config-data-generated". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.019673 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07a2906d-db30-4578-8b1e-088ca2f20ced-operator-scripts" (OuterVolumeSpecName: "operator-scripts") pod "07a2906d-db30-4578-8b1e-088ca2f20ced" (UID: "07a2906d-db30-4578-8b1e-088ca2f20ced"). InnerVolumeSpecName "operator-scripts". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.020310 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07a2906d-db30-4578-8b1e-088ca2f20ced-kolla-config" (OuterVolumeSpecName: "kolla-config") pod "07a2906d-db30-4578-8b1e-088ca2f20ced" (UID: "07a2906d-db30-4578-8b1e-088ca2f20ced"). InnerVolumeSpecName "kolla-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.024338 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07a2906d-db30-4578-8b1e-088ca2f20ced-kube-api-access-ng8cd" (OuterVolumeSpecName: "kube-api-access-ng8cd") pod "07a2906d-db30-4578-8b1e-088ca2f20ced" (UID: "07a2906d-db30-4578-8b1e-088ca2f20ced"). InnerVolumeSpecName "kube-api-access-ng8cd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.026913 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage10-crc" (OuterVolumeSpecName: "mysql-db") pod "07a2906d-db30-4578-8b1e-088ca2f20ced" (UID: "07a2906d-db30-4578-8b1e-088ca2f20ced"). InnerVolumeSpecName "local-storage10-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.119221 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpg2k\" (UniqueName: \"kubernetes.io/projected/065b8624-7cdb-463c-9636-d3e980119eb7-kube-api-access-qpg2k\") pod \"065b8624-7cdb-463c-9636-d3e980119eb7\" (UID: \"065b8624-7cdb-463c-9636-d3e980119eb7\") " Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.119758 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ng8cd\" (UniqueName: \"kubernetes.io/projected/07a2906d-db30-4578-8b1e-088ca2f20ced-kube-api-access-ng8cd\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.119784 4751 reconciler_common.go:293] "Volume detached for volume \"config-data-default\" (UniqueName: \"kubernetes.io/configmap/07a2906d-db30-4578-8b1e-088ca2f20ced-config-data-default\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.119820 4751 reconciler_common.go:293] "Volume detached for volume 
\"kolla-config\" (UniqueName: \"kubernetes.io/configmap/07a2906d-db30-4578-8b1e-088ca2f20ced-kolla-config\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.119846 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" " Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.119861 4751 reconciler_common.go:293] "Volume detached for volume \"config-data-generated\" (UniqueName: \"kubernetes.io/empty-dir/07a2906d-db30-4578-8b1e-088ca2f20ced-config-data-generated\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.119897 4751 reconciler_common.go:293] "Volume detached for volume \"operator-scripts\" (UniqueName: \"kubernetes.io/configmap/07a2906d-db30-4578-8b1e-088ca2f20ced-operator-scripts\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.129903 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/065b8624-7cdb-463c-9636-d3e980119eb7-kube-api-access-qpg2k" (OuterVolumeSpecName: "kube-api-access-qpg2k") pod "065b8624-7cdb-463c-9636-d3e980119eb7" (UID: "065b8624-7cdb-463c-9636-d3e980119eb7"). InnerVolumeSpecName "kube-api-access-qpg2k". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.137353 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage10-crc" (UniqueName: "kubernetes.io/local-volume/local-storage10-crc") on node "crc" Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.221023 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage10-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage10-crc\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.221059 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpg2k\" (UniqueName: \"kubernetes.io/projected/065b8624-7cdb-463c-9636-d3e980119eb7-kube-api-access-qpg2k\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.256569 4751 generic.go:334] "Generic (PLEG): container finished" podID="9f3dfaad-d451-448b-a447-47fc7bbff0e5" containerID="668d137892e68f6f4b2298a804a817a3b16a09cc4c85201f8a03fca82e38e755" exitCode=0 Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.256648 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-59595cd-9djr5" event={"ID":"9f3dfaad-d451-448b-a447-47fc7bbff0e5","Type":"ContainerDied","Data":"668d137892e68f6f4b2298a804a817a3b16a09cc4c85201f8a03fca82e38e755"} Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.256678 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-controller-manager-59595cd-9djr5" event={"ID":"9f3dfaad-d451-448b-a447-47fc7bbff0e5","Type":"ContainerDied","Data":"da3b689c07e135768fb2bc22c72ffa9872cf722e04a986707e86515f65114b9c"} Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.256700 4751 scope.go:117] "RemoveContainer" containerID="668d137892e68f6f4b2298a804a817a3b16a09cc4c85201f8a03fca82e38e755" Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.256798 4751 
util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/swift-operator-controller-manager-59595cd-9djr5" Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.260564 4751 generic.go:334] "Generic (PLEG): container finished" podID="07a2906d-db30-4578-8b1e-088ca2f20ced" containerID="0e2fc16f141e03061cef807a2713d8f66a6c5d9ed59205690727526ba6a882ea" exitCode=0 Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.260691 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-0" event={"ID":"07a2906d-db30-4578-8b1e-088ca2f20ced","Type":"ContainerDied","Data":"0e2fc16f141e03061cef807a2713d8f66a6c5d9ed59205690727526ba6a882ea"} Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.260766 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/openstack-galera-0" event={"ID":"07a2906d-db30-4578-8b1e-088ca2f20ced","Type":"ContainerDied","Data":"2161c6d33cfda8a5b256a8346412b18ad489372437142a6a6602a50128a7c01a"} Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.260878 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/openstack-galera-0" Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.267099 4751 generic.go:334] "Generic (PLEG): container finished" podID="065b8624-7cdb-463c-9636-d3e980119eb7" containerID="d60e188fde30ce119895fe465702862991673f5195ee276a966d98efdbbb7cf3" exitCode=0 Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.267133 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-75pvx" event={"ID":"065b8624-7cdb-463c-9636-d3e980119eb7","Type":"ContainerDied","Data":"d60e188fde30ce119895fe465702862991673f5195ee276a966d98efdbbb7cf3"} Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.267151 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/swift-operator-index-75pvx" event={"ID":"065b8624-7cdb-463c-9636-d3e980119eb7","Type":"ContainerDied","Data":"5f493f4e9467a7936b5f9e1ffc78338268f76a1484833cfafa1962d6944fc1c3"} Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.267153 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/swift-operator-index-75pvx" Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.278169 4751 scope.go:117] "RemoveContainer" containerID="668d137892e68f6f4b2298a804a817a3b16a09cc4c85201f8a03fca82e38e755" Jan 31 15:06:31 crc kubenswrapper[4751]: E0131 15:06:31.278606 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"668d137892e68f6f4b2298a804a817a3b16a09cc4c85201f8a03fca82e38e755\": container with ID starting with 668d137892e68f6f4b2298a804a817a3b16a09cc4c85201f8a03fca82e38e755 not found: ID does not exist" containerID="668d137892e68f6f4b2298a804a817a3b16a09cc4c85201f8a03fca82e38e755" Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.278637 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"668d137892e68f6f4b2298a804a817a3b16a09cc4c85201f8a03fca82e38e755"} err="failed to get container status \"668d137892e68f6f4b2298a804a817a3b16a09cc4c85201f8a03fca82e38e755\": rpc error: code = NotFound desc = could not find container \"668d137892e68f6f4b2298a804a817a3b16a09cc4c85201f8a03fca82e38e755\": container with ID starting with 668d137892e68f6f4b2298a804a817a3b16a09cc4c85201f8a03fca82e38e755 not found: ID does not exist" Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.278657 4751 scope.go:117] "RemoveContainer" containerID="0e2fc16f141e03061cef807a2713d8f66a6c5d9ed59205690727526ba6a882ea" Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.296508 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/openstack-galera-0"] Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.304118 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/openstack-galera-0"] Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.309350 4751 scope.go:117] "RemoveContainer" containerID="1dd59d047e2f99760bb45d01f43a08d4aeb1e5d45326b19b0123bcf023e41f96" 
Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.321657 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/swift-operator-index-75pvx"] Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.329027 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/swift-operator-index-75pvx"] Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.334344 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/swift-operator-controller-manager-59595cd-9djr5"] Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.338928 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/swift-operator-controller-manager-59595cd-9djr5"] Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.345124 4751 scope.go:117] "RemoveContainer" containerID="0e2fc16f141e03061cef807a2713d8f66a6c5d9ed59205690727526ba6a882ea" Jan 31 15:06:31 crc kubenswrapper[4751]: E0131 15:06:31.345564 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0e2fc16f141e03061cef807a2713d8f66a6c5d9ed59205690727526ba6a882ea\": container with ID starting with 0e2fc16f141e03061cef807a2713d8f66a6c5d9ed59205690727526ba6a882ea not found: ID does not exist" containerID="0e2fc16f141e03061cef807a2713d8f66a6c5d9ed59205690727526ba6a882ea" Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.345606 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0e2fc16f141e03061cef807a2713d8f66a6c5d9ed59205690727526ba6a882ea"} err="failed to get container status \"0e2fc16f141e03061cef807a2713d8f66a6c5d9ed59205690727526ba6a882ea\": rpc error: code = NotFound desc = could not find container \"0e2fc16f141e03061cef807a2713d8f66a6c5d9ed59205690727526ba6a882ea\": container with ID starting with 0e2fc16f141e03061cef807a2713d8f66a6c5d9ed59205690727526ba6a882ea not found: ID does not exist" Jan 31 15:06:31 crc 
kubenswrapper[4751]: I0131 15:06:31.345636 4751 scope.go:117] "RemoveContainer" containerID="1dd59d047e2f99760bb45d01f43a08d4aeb1e5d45326b19b0123bcf023e41f96" Jan 31 15:06:31 crc kubenswrapper[4751]: E0131 15:06:31.346123 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1dd59d047e2f99760bb45d01f43a08d4aeb1e5d45326b19b0123bcf023e41f96\": container with ID starting with 1dd59d047e2f99760bb45d01f43a08d4aeb1e5d45326b19b0123bcf023e41f96 not found: ID does not exist" containerID="1dd59d047e2f99760bb45d01f43a08d4aeb1e5d45326b19b0123bcf023e41f96" Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.346233 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1dd59d047e2f99760bb45d01f43a08d4aeb1e5d45326b19b0123bcf023e41f96"} err="failed to get container status \"1dd59d047e2f99760bb45d01f43a08d4aeb1e5d45326b19b0123bcf023e41f96\": rpc error: code = NotFound desc = could not find container \"1dd59d047e2f99760bb45d01f43a08d4aeb1e5d45326b19b0123bcf023e41f96\": container with ID starting with 1dd59d047e2f99760bb45d01f43a08d4aeb1e5d45326b19b0123bcf023e41f96 not found: ID does not exist" Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.346310 4751 scope.go:117] "RemoveContainer" containerID="d60e188fde30ce119895fe465702862991673f5195ee276a966d98efdbbb7cf3" Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.365228 4751 scope.go:117] "RemoveContainer" containerID="d60e188fde30ce119895fe465702862991673f5195ee276a966d98efdbbb7cf3" Jan 31 15:06:31 crc kubenswrapper[4751]: E0131 15:06:31.365811 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d60e188fde30ce119895fe465702862991673f5195ee276a966d98efdbbb7cf3\": container with ID starting with d60e188fde30ce119895fe465702862991673f5195ee276a966d98efdbbb7cf3 not found: ID does not exist" 
containerID="d60e188fde30ce119895fe465702862991673f5195ee276a966d98efdbbb7cf3" Jan 31 15:06:31 crc kubenswrapper[4751]: I0131 15:06:31.365913 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d60e188fde30ce119895fe465702862991673f5195ee276a966d98efdbbb7cf3"} err="failed to get container status \"d60e188fde30ce119895fe465702862991673f5195ee276a966d98efdbbb7cf3\": rpc error: code = NotFound desc = could not find container \"d60e188fde30ce119895fe465702862991673f5195ee276a966d98efdbbb7cf3\": container with ID starting with d60e188fde30ce119895fe465702862991673f5195ee276a966d98efdbbb7cf3 not found: ID does not exist" Jan 31 15:06:32 crc kubenswrapper[4751]: I0131 15:06:32.416624 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="065b8624-7cdb-463c-9636-d3e980119eb7" path="/var/lib/kubelet/pods/065b8624-7cdb-463c-9636-d3e980119eb7/volumes" Jan 31 15:06:32 crc kubenswrapper[4751]: I0131 15:06:32.417506 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07a2906d-db30-4578-8b1e-088ca2f20ced" path="/var/lib/kubelet/pods/07a2906d-db30-4578-8b1e-088ca2f20ced/volumes" Jan 31 15:06:32 crc kubenswrapper[4751]: I0131 15:06:32.418504 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19317a08-b18b-42c9-bdc9-394e1e06257d" path="/var/lib/kubelet/pods/19317a08-b18b-42c9-bdc9-394e1e06257d/volumes" Jan 31 15:06:32 crc kubenswrapper[4751]: I0131 15:06:32.419964 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="886303a3-d05b-4551-bd03-ebc2e2aef77c" path="/var/lib/kubelet/pods/886303a3-d05b-4551-bd03-ebc2e2aef77c/volumes" Jan 31 15:06:32 crc kubenswrapper[4751]: I0131 15:06:32.420812 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f3dfaad-d451-448b-a447-47fc7bbff0e5" path="/var/lib/kubelet/pods/9f3dfaad-d451-448b-a447-47fc7bbff0e5/volumes" Jan 31 15:06:32 crc kubenswrapper[4751]: I0131 15:06:32.842545 4751 
kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7f68887647-qvqrq"] Jan 31 15:06:32 crc kubenswrapper[4751]: I0131 15:06:32.842772 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/keystone-operator-controller-manager-7f68887647-qvqrq" podUID="49ea8aae-ad89-4383-8f2f-ba35872fd605" containerName="manager" containerID="cri-o://0c37d2b2bcc47557f4028d9e251b0db237f4b56ff1d49ca666627d0449655ab2" gracePeriod=10 Jan 31 15:06:33 crc kubenswrapper[4751]: I0131 15:06:33.154755 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-index-6bwnv"] Jan 31 15:06:33 crc kubenswrapper[4751]: I0131 15:06:33.155238 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/keystone-operator-index-6bwnv" podUID="08530f42-16c5-4253-a623-2a032aeb95a7" containerName="registry-server" containerID="cri-o://3035639c5750cf779b9b57b5d0ade23abfc3c28de57f8e43e074a91f02a62e68" gracePeriod=30 Jan 31 15:06:33 crc kubenswrapper[4751]: I0131 15:06:33.182904 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4ef6sfsr"] Jan 31 15:06:33 crc kubenswrapper[4751]: I0131 15:06:33.188636 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/b43f19b8e3bb8997a527070b172ae030accff9cd1a2f2b076f58d9c4ef6sfsr"] Jan 31 15:06:33 crc kubenswrapper[4751]: I0131 15:06:33.299922 4751 generic.go:334] "Generic (PLEG): container finished" podID="08530f42-16c5-4253-a623-2a032aeb95a7" containerID="3035639c5750cf779b9b57b5d0ade23abfc3c28de57f8e43e074a91f02a62e68" exitCode=0 Jan 31 15:06:33 crc kubenswrapper[4751]: I0131 15:06:33.299977 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-6bwnv" 
event={"ID":"08530f42-16c5-4253-a623-2a032aeb95a7","Type":"ContainerDied","Data":"3035639c5750cf779b9b57b5d0ade23abfc3c28de57f8e43e074a91f02a62e68"} Jan 31 15:06:33 crc kubenswrapper[4751]: I0131 15:06:33.301920 4751 generic.go:334] "Generic (PLEG): container finished" podID="49ea8aae-ad89-4383-8f2f-ba35872fd605" containerID="0c37d2b2bcc47557f4028d9e251b0db237f4b56ff1d49ca666627d0449655ab2" exitCode=0 Jan 31 15:06:33 crc kubenswrapper[4751]: I0131 15:06:33.301952 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7f68887647-qvqrq" event={"ID":"49ea8aae-ad89-4383-8f2f-ba35872fd605","Type":"ContainerDied","Data":"0c37d2b2bcc47557f4028d9e251b0db237f4b56ff1d49ca666627d0449655ab2"} Jan 31 15:06:33 crc kubenswrapper[4751]: I0131 15:06:33.757141 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-6bwnv" Jan 31 15:06:33 crc kubenswrapper[4751]: I0131 15:06:33.844523 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7f68887647-qvqrq" Jan 31 15:06:33 crc kubenswrapper[4751]: I0131 15:06:33.857177 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-msbp2\" (UniqueName: \"kubernetes.io/projected/08530f42-16c5-4253-a623-2a032aeb95a7-kube-api-access-msbp2\") pod \"08530f42-16c5-4253-a623-2a032aeb95a7\" (UID: \"08530f42-16c5-4253-a623-2a032aeb95a7\") " Jan 31 15:06:33 crc kubenswrapper[4751]: I0131 15:06:33.862626 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/08530f42-16c5-4253-a623-2a032aeb95a7-kube-api-access-msbp2" (OuterVolumeSpecName: "kube-api-access-msbp2") pod "08530f42-16c5-4253-a623-2a032aeb95a7" (UID: "08530f42-16c5-4253-a623-2a032aeb95a7"). InnerVolumeSpecName "kube-api-access-msbp2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:06:33 crc kubenswrapper[4751]: I0131 15:06:33.961530 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/49ea8aae-ad89-4383-8f2f-ba35872fd605-apiservice-cert\") pod \"49ea8aae-ad89-4383-8f2f-ba35872fd605\" (UID: \"49ea8aae-ad89-4383-8f2f-ba35872fd605\") " Jan 31 15:06:33 crc kubenswrapper[4751]: I0131 15:06:33.961683 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mvdb6\" (UniqueName: \"kubernetes.io/projected/49ea8aae-ad89-4383-8f2f-ba35872fd605-kube-api-access-mvdb6\") pod \"49ea8aae-ad89-4383-8f2f-ba35872fd605\" (UID: \"49ea8aae-ad89-4383-8f2f-ba35872fd605\") " Jan 31 15:06:33 crc kubenswrapper[4751]: I0131 15:06:33.961718 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/49ea8aae-ad89-4383-8f2f-ba35872fd605-webhook-cert\") pod \"49ea8aae-ad89-4383-8f2f-ba35872fd605\" (UID: \"49ea8aae-ad89-4383-8f2f-ba35872fd605\") " Jan 31 15:06:33 crc kubenswrapper[4751]: I0131 15:06:33.962563 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-msbp2\" (UniqueName: \"kubernetes.io/projected/08530f42-16c5-4253-a623-2a032aeb95a7-kube-api-access-msbp2\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:33 crc kubenswrapper[4751]: I0131 15:06:33.965217 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49ea8aae-ad89-4383-8f2f-ba35872fd605-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "49ea8aae-ad89-4383-8f2f-ba35872fd605" (UID: "49ea8aae-ad89-4383-8f2f-ba35872fd605"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:06:33 crc kubenswrapper[4751]: I0131 15:06:33.967161 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49ea8aae-ad89-4383-8f2f-ba35872fd605-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "49ea8aae-ad89-4383-8f2f-ba35872fd605" (UID: "49ea8aae-ad89-4383-8f2f-ba35872fd605"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:06:33 crc kubenswrapper[4751]: I0131 15:06:33.968405 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ea8aae-ad89-4383-8f2f-ba35872fd605-kube-api-access-mvdb6" (OuterVolumeSpecName: "kube-api-access-mvdb6") pod "49ea8aae-ad89-4383-8f2f-ba35872fd605" (UID: "49ea8aae-ad89-4383-8f2f-ba35872fd605"). InnerVolumeSpecName "kube-api-access-mvdb6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:06:34 crc kubenswrapper[4751]: I0131 15:06:34.063722 4751 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/49ea8aae-ad89-4383-8f2f-ba35872fd605-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:34 crc kubenswrapper[4751]: I0131 15:06:34.063759 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mvdb6\" (UniqueName: \"kubernetes.io/projected/49ea8aae-ad89-4383-8f2f-ba35872fd605-kube-api-access-mvdb6\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:34 crc kubenswrapper[4751]: I0131 15:06:34.063772 4751 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/49ea8aae-ad89-4383-8f2f-ba35872fd605-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:34 crc kubenswrapper[4751]: I0131 15:06:34.310146 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-index-6bwnv" 
event={"ID":"08530f42-16c5-4253-a623-2a032aeb95a7","Type":"ContainerDied","Data":"287dcdc51fb3cdd7484c91633318b58c60ad6d8b753d031c65761b77a7b8670b"} Jan 31 15:06:34 crc kubenswrapper[4751]: I0131 15:06:34.310178 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-index-6bwnv" Jan 31 15:06:34 crc kubenswrapper[4751]: I0131 15:06:34.310198 4751 scope.go:117] "RemoveContainer" containerID="3035639c5750cf779b9b57b5d0ade23abfc3c28de57f8e43e074a91f02a62e68" Jan 31 15:06:34 crc kubenswrapper[4751]: I0131 15:06:34.312999 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/keystone-operator-controller-manager-7f68887647-qvqrq" event={"ID":"49ea8aae-ad89-4383-8f2f-ba35872fd605","Type":"ContainerDied","Data":"57b02fb0aa9fe148da3716e9376b1f52be8527b141c4d304bf8040ea0b69e451"} Jan 31 15:06:34 crc kubenswrapper[4751]: I0131 15:06:34.313056 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/keystone-operator-controller-manager-7f68887647-qvqrq" Jan 31 15:06:34 crc kubenswrapper[4751]: I0131 15:06:34.330932 4751 scope.go:117] "RemoveContainer" containerID="0c37d2b2bcc47557f4028d9e251b0db237f4b56ff1d49ca666627d0449655ab2" Jan 31 15:06:34 crc kubenswrapper[4751]: I0131 15:06:34.350606 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7f68887647-qvqrq"] Jan 31 15:06:34 crc kubenswrapper[4751]: I0131 15:06:34.358609 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/keystone-operator-controller-manager-7f68887647-qvqrq"] Jan 31 15:06:34 crc kubenswrapper[4751]: I0131 15:06:34.366268 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/keystone-operator-index-6bwnv"] Jan 31 15:06:34 crc kubenswrapper[4751]: I0131 15:06:34.369756 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" 
pods=["openstack-operators/keystone-operator-index-6bwnv"] Jan 31 15:06:34 crc kubenswrapper[4751]: I0131 15:06:34.413133 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="08530f42-16c5-4253-a623-2a032aeb95a7" path="/var/lib/kubelet/pods/08530f42-16c5-4253-a623-2a032aeb95a7/volumes" Jan 31 15:06:34 crc kubenswrapper[4751]: I0131 15:06:34.413741 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ea8aae-ad89-4383-8f2f-ba35872fd605" path="/var/lib/kubelet/pods/49ea8aae-ad89-4383-8f2f-ba35872fd605/volumes" Jan 31 15:06:34 crc kubenswrapper[4751]: I0131 15:06:34.414428 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="772cd794-fe9a-4ac3-8df8-e7f29edb85bf" path="/var/lib/kubelet/pods/772cd794-fe9a-4ac3-8df8-e7f29edb85bf/volumes" Jan 31 15:06:35 crc kubenswrapper[4751]: I0131 15:06:35.225323 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-fnbvg"] Jan 31 15:06:35 crc kubenswrapper[4751]: I0131 15:06:35.225943 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-fnbvg" podUID="3b77f113-f8c0-47b8-ad79-d1be38bf6e09" containerName="operator" containerID="cri-o://669ca17a7f872d19f207a1db8b3eeef3d4392ee464f55983153e6452773c111f" gracePeriod=10 Jan 31 15:06:35 crc kubenswrapper[4751]: I0131 15:06:35.552510 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-2wvgm"] Jan 31 15:06:35 crc kubenswrapper[4751]: I0131 15:06:35.552740 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/rabbitmq-cluster-operator-index-2wvgm" podUID="44c515c1-f30f-44da-8959-cfd2530b46b7" containerName="registry-server" containerID="cri-o://584fb5ac2ba8eb206c7f2718045838c65ff9fa87a8c959a34945da15defa3f15" gracePeriod=30 Jan 31 15:06:35 crc kubenswrapper[4751]: I0131 
15:06:35.585576 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590swhdb"] Jan 31 15:06:35 crc kubenswrapper[4751]: I0131 15:06:35.599728 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/9704761d240e56fb98655ffd81084895b33a73ec711f4dcdef0450e590swhdb"] Jan 31 15:06:35 crc kubenswrapper[4751]: I0131 15:06:35.699409 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-fnbvg" Jan 31 15:06:35 crc kubenswrapper[4751]: I0131 15:06:35.785042 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x5bkt\" (UniqueName: \"kubernetes.io/projected/3b77f113-f8c0-47b8-ad79-d1be38bf6e09-kube-api-access-x5bkt\") pod \"3b77f113-f8c0-47b8-ad79-d1be38bf6e09\" (UID: \"3b77f113-f8c0-47b8-ad79-d1be38bf6e09\") " Jan 31 15:06:35 crc kubenswrapper[4751]: I0131 15:06:35.790842 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3b77f113-f8c0-47b8-ad79-d1be38bf6e09-kube-api-access-x5bkt" (OuterVolumeSpecName: "kube-api-access-x5bkt") pod "3b77f113-f8c0-47b8-ad79-d1be38bf6e09" (UID: "3b77f113-f8c0-47b8-ad79-d1be38bf6e09"). InnerVolumeSpecName "kube-api-access-x5bkt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:06:35 crc kubenswrapper[4751]: I0131 15:06:35.886889 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x5bkt\" (UniqueName: \"kubernetes.io/projected/3b77f113-f8c0-47b8-ad79-d1be38bf6e09-kube-api-access-x5bkt\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:35 crc kubenswrapper[4751]: I0131 15:06:35.984536 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-2wvgm" Jan 31 15:06:36 crc kubenswrapper[4751]: I0131 15:06:36.088768 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzzlf\" (UniqueName: \"kubernetes.io/projected/44c515c1-f30f-44da-8959-cfd2530b46b7-kube-api-access-rzzlf\") pod \"44c515c1-f30f-44da-8959-cfd2530b46b7\" (UID: \"44c515c1-f30f-44da-8959-cfd2530b46b7\") " Jan 31 15:06:36 crc kubenswrapper[4751]: I0131 15:06:36.091693 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44c515c1-f30f-44da-8959-cfd2530b46b7-kube-api-access-rzzlf" (OuterVolumeSpecName: "kube-api-access-rzzlf") pod "44c515c1-f30f-44da-8959-cfd2530b46b7" (UID: "44c515c1-f30f-44da-8959-cfd2530b46b7"). InnerVolumeSpecName "kube-api-access-rzzlf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:06:36 crc kubenswrapper[4751]: I0131 15:06:36.189918 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rzzlf\" (UniqueName: \"kubernetes.io/projected/44c515c1-f30f-44da-8959-cfd2530b46b7-kube-api-access-rzzlf\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:36 crc kubenswrapper[4751]: I0131 15:06:36.332932 4751 generic.go:334] "Generic (PLEG): container finished" podID="44c515c1-f30f-44da-8959-cfd2530b46b7" containerID="584fb5ac2ba8eb206c7f2718045838c65ff9fa87a8c959a34945da15defa3f15" exitCode=0 Jan 31 15:06:36 crc kubenswrapper[4751]: I0131 15:06:36.332960 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-index-2wvgm" Jan 31 15:06:36 crc kubenswrapper[4751]: I0131 15:06:36.333025 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-2wvgm" event={"ID":"44c515c1-f30f-44da-8959-cfd2530b46b7","Type":"ContainerDied","Data":"584fb5ac2ba8eb206c7f2718045838c65ff9fa87a8c959a34945da15defa3f15"} Jan 31 15:06:36 crc kubenswrapper[4751]: I0131 15:06:36.333064 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-index-2wvgm" event={"ID":"44c515c1-f30f-44da-8959-cfd2530b46b7","Type":"ContainerDied","Data":"b07bdb6979a897c42db641896c014136c4b8817fab040635c752ccba6b137d19"} Jan 31 15:06:36 crc kubenswrapper[4751]: I0131 15:06:36.333111 4751 scope.go:117] "RemoveContainer" containerID="584fb5ac2ba8eb206c7f2718045838c65ff9fa87a8c959a34945da15defa3f15" Jan 31 15:06:36 crc kubenswrapper[4751]: I0131 15:06:36.334569 4751 generic.go:334] "Generic (PLEG): container finished" podID="3b77f113-f8c0-47b8-ad79-d1be38bf6e09" containerID="669ca17a7f872d19f207a1db8b3eeef3d4392ee464f55983153e6452773c111f" exitCode=0 Jan 31 15:06:36 crc kubenswrapper[4751]: I0131 15:06:36.334590 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-fnbvg" event={"ID":"3b77f113-f8c0-47b8-ad79-d1be38bf6e09","Type":"ContainerDied","Data":"669ca17a7f872d19f207a1db8b3eeef3d4392ee464f55983153e6452773c111f"} Jan 31 15:06:36 crc kubenswrapper[4751]: I0131 15:06:36.334605 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-fnbvg" event={"ID":"3b77f113-f8c0-47b8-ad79-d1be38bf6e09","Type":"ContainerDied","Data":"9417f04e17815ef9de6ec5d2357c85d9f600b65c7a818fc63c494820d893f560"} Jan 31 15:06:36 crc kubenswrapper[4751]: I0131 15:06:36.334631 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/rabbitmq-cluster-operator-779fc9694b-fnbvg" Jan 31 15:06:36 crc kubenswrapper[4751]: I0131 15:06:36.367684 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-2wvgm"] Jan 31 15:06:36 crc kubenswrapper[4751]: I0131 15:06:36.377101 4751 scope.go:117] "RemoveContainer" containerID="584fb5ac2ba8eb206c7f2718045838c65ff9fa87a8c959a34945da15defa3f15" Jan 31 15:06:36 crc kubenswrapper[4751]: E0131 15:06:36.377828 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"584fb5ac2ba8eb206c7f2718045838c65ff9fa87a8c959a34945da15defa3f15\": container with ID starting with 584fb5ac2ba8eb206c7f2718045838c65ff9fa87a8c959a34945da15defa3f15 not found: ID does not exist" containerID="584fb5ac2ba8eb206c7f2718045838c65ff9fa87a8c959a34945da15defa3f15" Jan 31 15:06:36 crc kubenswrapper[4751]: I0131 15:06:36.377874 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"584fb5ac2ba8eb206c7f2718045838c65ff9fa87a8c959a34945da15defa3f15"} err="failed to get container status \"584fb5ac2ba8eb206c7f2718045838c65ff9fa87a8c959a34945da15defa3f15\": rpc error: code = NotFound desc = could not find container \"584fb5ac2ba8eb206c7f2718045838c65ff9fa87a8c959a34945da15defa3f15\": container with ID starting with 584fb5ac2ba8eb206c7f2718045838c65ff9fa87a8c959a34945da15defa3f15 not found: ID does not exist" Jan 31 15:06:36 crc kubenswrapper[4751]: I0131 15:06:36.377894 4751 scope.go:117] "RemoveContainer" containerID="669ca17a7f872d19f207a1db8b3eeef3d4392ee464f55983153e6452773c111f" Jan 31 15:06:36 crc kubenswrapper[4751]: I0131 15:06:36.383059 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-index-2wvgm"] Jan 31 15:06:36 crc kubenswrapper[4751]: I0131 15:06:36.392946 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-fnbvg"] Jan 31 15:06:36 crc kubenswrapper[4751]: I0131 15:06:36.398501 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/rabbitmq-cluster-operator-779fc9694b-fnbvg"] Jan 31 15:06:36 crc kubenswrapper[4751]: I0131 15:06:36.401748 4751 scope.go:117] "RemoveContainer" containerID="669ca17a7f872d19f207a1db8b3eeef3d4392ee464f55983153e6452773c111f" Jan 31 15:06:36 crc kubenswrapper[4751]: E0131 15:06:36.402251 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"669ca17a7f872d19f207a1db8b3eeef3d4392ee464f55983153e6452773c111f\": container with ID starting with 669ca17a7f872d19f207a1db8b3eeef3d4392ee464f55983153e6452773c111f not found: ID does not exist" containerID="669ca17a7f872d19f207a1db8b3eeef3d4392ee464f55983153e6452773c111f" Jan 31 15:06:36 crc kubenswrapper[4751]: I0131 15:06:36.402276 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"669ca17a7f872d19f207a1db8b3eeef3d4392ee464f55983153e6452773c111f"} err="failed to get container status \"669ca17a7f872d19f207a1db8b3eeef3d4392ee464f55983153e6452773c111f\": rpc error: code = NotFound desc = could not find container \"669ca17a7f872d19f207a1db8b3eeef3d4392ee464f55983153e6452773c111f\": container with ID starting with 669ca17a7f872d19f207a1db8b3eeef3d4392ee464f55983153e6452773c111f not found: ID does not exist" Jan 31 15:06:36 crc kubenswrapper[4751]: I0131 15:06:36.417706 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3b77f113-f8c0-47b8-ad79-d1be38bf6e09" path="/var/lib/kubelet/pods/3b77f113-f8c0-47b8-ad79-d1be38bf6e09/volumes" Jan 31 15:06:36 crc kubenswrapper[4751]: I0131 15:06:36.424372 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44c515c1-f30f-44da-8959-cfd2530b46b7" path="/var/lib/kubelet/pods/44c515c1-f30f-44da-8959-cfd2530b46b7/volumes" 
Jan 31 15:06:36 crc kubenswrapper[4751]: I0131 15:06:36.425718 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a525382f-29ee-4393-9e5b-1b3e989a1bc3" path="/var/lib/kubelet/pods/a525382f-29ee-4393-9e5b-1b3e989a1bc3/volumes" Jan 31 15:06:37 crc kubenswrapper[4751]: I0131 15:06:37.505158 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57f67fdff5-45pkl"] Jan 31 15:06:37 crc kubenswrapper[4751]: I0131 15:06:37.505708 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/infra-operator-controller-manager-57f67fdff5-45pkl" podUID="6578d137-d120-43b2-99e3-71d4f6525d6c" containerName="manager" containerID="cri-o://5dbebc52897c07a2e7dfa38ad3cf5873d3f7ef9969f655b93dd162236a7cbaa8" gracePeriod=10 Jan 31 15:06:37 crc kubenswrapper[4751]: I0131 15:06:37.760503 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-5tz82"] Jan 31 15:06:37 crc kubenswrapper[4751]: I0131 15:06:37.760729 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/infra-operator-index-5tz82" podUID="15539f33-874c-45ae-8ee2-7f821c54b267" containerName="registry-server" containerID="cri-o://2880fdfb3f748e15e74bf089fd52f14043b007613c5d6a9fb47038e2c465f42a" gracePeriod=30 Jan 31 15:06:37 crc kubenswrapper[4751]: I0131 15:06:37.792494 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576w2vw6"] Jan 31 15:06:37 crc kubenswrapper[4751]: I0131 15:06:37.796898 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/d7c3b59ed6c2e571e21460d743e5fcd0c5f76cb7c446e474a3d05f7576w2vw6"] Jan 31 15:06:37 crc kubenswrapper[4751]: I0131 15:06:37.974360 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57f67fdff5-45pkl" Jan 31 15:06:38 crc kubenswrapper[4751]: I0131 15:06:38.115417 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6578d137-d120-43b2-99e3-71d4f6525d6c-apiservice-cert\") pod \"6578d137-d120-43b2-99e3-71d4f6525d6c\" (UID: \"6578d137-d120-43b2-99e3-71d4f6525d6c\") " Jan 31 15:06:38 crc kubenswrapper[4751]: I0131 15:06:38.115490 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6578d137-d120-43b2-99e3-71d4f6525d6c-webhook-cert\") pod \"6578d137-d120-43b2-99e3-71d4f6525d6c\" (UID: \"6578d137-d120-43b2-99e3-71d4f6525d6c\") " Jan 31 15:06:38 crc kubenswrapper[4751]: I0131 15:06:38.115575 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7hq5r\" (UniqueName: \"kubernetes.io/projected/6578d137-d120-43b2-99e3-71d4f6525d6c-kube-api-access-7hq5r\") pod \"6578d137-d120-43b2-99e3-71d4f6525d6c\" (UID: \"6578d137-d120-43b2-99e3-71d4f6525d6c\") " Jan 31 15:06:38 crc kubenswrapper[4751]: I0131 15:06:38.121315 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6578d137-d120-43b2-99e3-71d4f6525d6c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "6578d137-d120-43b2-99e3-71d4f6525d6c" (UID: "6578d137-d120-43b2-99e3-71d4f6525d6c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:06:38 crc kubenswrapper[4751]: I0131 15:06:38.122177 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6578d137-d120-43b2-99e3-71d4f6525d6c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "6578d137-d120-43b2-99e3-71d4f6525d6c" (UID: "6578d137-d120-43b2-99e3-71d4f6525d6c"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:06:38 crc kubenswrapper[4751]: I0131 15:06:38.128862 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6578d137-d120-43b2-99e3-71d4f6525d6c-kube-api-access-7hq5r" (OuterVolumeSpecName: "kube-api-access-7hq5r") pod "6578d137-d120-43b2-99e3-71d4f6525d6c" (UID: "6578d137-d120-43b2-99e3-71d4f6525d6c"). InnerVolumeSpecName "kube-api-access-7hq5r". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:06:38 crc kubenswrapper[4751]: I0131 15:06:38.159826 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/infra-operator-index-5tz82" Jan 31 15:06:38 crc kubenswrapper[4751]: I0131 15:06:38.216311 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rmd6m\" (UniqueName: \"kubernetes.io/projected/15539f33-874c-45ae-8ee2-7f821c54b267-kube-api-access-rmd6m\") pod \"15539f33-874c-45ae-8ee2-7f821c54b267\" (UID: \"15539f33-874c-45ae-8ee2-7f821c54b267\") " Jan 31 15:06:38 crc kubenswrapper[4751]: I0131 15:06:38.216555 4751 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/6578d137-d120-43b2-99e3-71d4f6525d6c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:38 crc kubenswrapper[4751]: I0131 15:06:38.216575 4751 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/6578d137-d120-43b2-99e3-71d4f6525d6c-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:38 crc kubenswrapper[4751]: I0131 15:06:38.216583 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7hq5r\" (UniqueName: \"kubernetes.io/projected/6578d137-d120-43b2-99e3-71d4f6525d6c-kube-api-access-7hq5r\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:38 crc kubenswrapper[4751]: I0131 15:06:38.220488 4751 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15539f33-874c-45ae-8ee2-7f821c54b267-kube-api-access-rmd6m" (OuterVolumeSpecName: "kube-api-access-rmd6m") pod "15539f33-874c-45ae-8ee2-7f821c54b267" (UID: "15539f33-874c-45ae-8ee2-7f821c54b267"). InnerVolumeSpecName "kube-api-access-rmd6m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:06:38 crc kubenswrapper[4751]: I0131 15:06:38.318145 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rmd6m\" (UniqueName: \"kubernetes.io/projected/15539f33-874c-45ae-8ee2-7f821c54b267-kube-api-access-rmd6m\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:38 crc kubenswrapper[4751]: I0131 15:06:38.354112 4751 generic.go:334] "Generic (PLEG): container finished" podID="6578d137-d120-43b2-99e3-71d4f6525d6c" containerID="5dbebc52897c07a2e7dfa38ad3cf5873d3f7ef9969f655b93dd162236a7cbaa8" exitCode=0 Jan 31 15:06:38 crc kubenswrapper[4751]: I0131 15:06:38.354134 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-controller-manager-57f67fdff5-45pkl" Jan 31 15:06:38 crc kubenswrapper[4751]: I0131 15:06:38.354215 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57f67fdff5-45pkl" event={"ID":"6578d137-d120-43b2-99e3-71d4f6525d6c","Type":"ContainerDied","Data":"5dbebc52897c07a2e7dfa38ad3cf5873d3f7ef9969f655b93dd162236a7cbaa8"} Jan 31 15:06:38 crc kubenswrapper[4751]: I0131 15:06:38.354267 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-controller-manager-57f67fdff5-45pkl" event={"ID":"6578d137-d120-43b2-99e3-71d4f6525d6c","Type":"ContainerDied","Data":"fd210a97bb4f47dccbcdbfba3a6c2101ade7c45f9468d34991d8307e718c3b16"} Jan 31 15:06:38 crc kubenswrapper[4751]: I0131 15:06:38.354288 4751 scope.go:117] "RemoveContainer" containerID="5dbebc52897c07a2e7dfa38ad3cf5873d3f7ef9969f655b93dd162236a7cbaa8" Jan 31 15:06:38 crc kubenswrapper[4751]: I0131 15:06:38.356708 4751 generic.go:334] "Generic (PLEG): container finished" podID="15539f33-874c-45ae-8ee2-7f821c54b267" containerID="2880fdfb3f748e15e74bf089fd52f14043b007613c5d6a9fb47038e2c465f42a" exitCode=0 Jan 31 15:06:38 crc kubenswrapper[4751]: I0131 15:06:38.356754 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/infra-operator-index-5tz82" Jan 31 15:06:38 crc kubenswrapper[4751]: I0131 15:06:38.356792 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-5tz82" event={"ID":"15539f33-874c-45ae-8ee2-7f821c54b267","Type":"ContainerDied","Data":"2880fdfb3f748e15e74bf089fd52f14043b007613c5d6a9fb47038e2c465f42a"} Jan 31 15:06:38 crc kubenswrapper[4751]: I0131 15:06:38.357151 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/infra-operator-index-5tz82" event={"ID":"15539f33-874c-45ae-8ee2-7f821c54b267","Type":"ContainerDied","Data":"a32321b4d51d551ed7ea834004f3d66d0ba16e8c6d1b16cfe9fefade795fabc7"} Jan 31 15:06:38 crc kubenswrapper[4751]: I0131 15:06:38.382312 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57f67fdff5-45pkl"] Jan 31 15:06:38 crc kubenswrapper[4751]: I0131 15:06:38.387598 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-controller-manager-57f67fdff5-45pkl"] Jan 31 15:06:38 crc kubenswrapper[4751]: I0131 15:06:38.399366 4751 scope.go:117] "RemoveContainer" containerID="5dbebc52897c07a2e7dfa38ad3cf5873d3f7ef9969f655b93dd162236a7cbaa8" Jan 31 15:06:38 crc kubenswrapper[4751]: E0131 15:06:38.400236 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dbebc52897c07a2e7dfa38ad3cf5873d3f7ef9969f655b93dd162236a7cbaa8\": container with ID starting with 5dbebc52897c07a2e7dfa38ad3cf5873d3f7ef9969f655b93dd162236a7cbaa8 not found: ID does not exist" containerID="5dbebc52897c07a2e7dfa38ad3cf5873d3f7ef9969f655b93dd162236a7cbaa8" Jan 31 15:06:38 crc kubenswrapper[4751]: I0131 15:06:38.400281 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dbebc52897c07a2e7dfa38ad3cf5873d3f7ef9969f655b93dd162236a7cbaa8"} err="failed to 
get container status \"5dbebc52897c07a2e7dfa38ad3cf5873d3f7ef9969f655b93dd162236a7cbaa8\": rpc error: code = NotFound desc = could not find container \"5dbebc52897c07a2e7dfa38ad3cf5873d3f7ef9969f655b93dd162236a7cbaa8\": container with ID starting with 5dbebc52897c07a2e7dfa38ad3cf5873d3f7ef9969f655b93dd162236a7cbaa8 not found: ID does not exist" Jan 31 15:06:38 crc kubenswrapper[4751]: I0131 15:06:38.400309 4751 scope.go:117] "RemoveContainer" containerID="2880fdfb3f748e15e74bf089fd52f14043b007613c5d6a9fb47038e2c465f42a" Jan 31 15:06:38 crc kubenswrapper[4751]: I0131 15:06:38.403336 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/infra-operator-index-5tz82"] Jan 31 15:06:38 crc kubenswrapper[4751]: I0131 15:06:38.412278 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="29a3b16f-f39d-413a-b623-3ac15aba50cf" path="/var/lib/kubelet/pods/29a3b16f-f39d-413a-b623-3ac15aba50cf/volumes" Jan 31 15:06:38 crc kubenswrapper[4751]: I0131 15:06:38.412864 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6578d137-d120-43b2-99e3-71d4f6525d6c" path="/var/lib/kubelet/pods/6578d137-d120-43b2-99e3-71d4f6525d6c/volumes" Jan 31 15:06:38 crc kubenswrapper[4751]: I0131 15:06:38.413275 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/infra-operator-index-5tz82"] Jan 31 15:06:38 crc kubenswrapper[4751]: I0131 15:06:38.421803 4751 scope.go:117] "RemoveContainer" containerID="2880fdfb3f748e15e74bf089fd52f14043b007613c5d6a9fb47038e2c465f42a" Jan 31 15:06:38 crc kubenswrapper[4751]: E0131 15:06:38.422311 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2880fdfb3f748e15e74bf089fd52f14043b007613c5d6a9fb47038e2c465f42a\": container with ID starting with 2880fdfb3f748e15e74bf089fd52f14043b007613c5d6a9fb47038e2c465f42a not found: ID does not exist" 
containerID="2880fdfb3f748e15e74bf089fd52f14043b007613c5d6a9fb47038e2c465f42a" Jan 31 15:06:38 crc kubenswrapper[4751]: I0131 15:06:38.422351 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2880fdfb3f748e15e74bf089fd52f14043b007613c5d6a9fb47038e2c465f42a"} err="failed to get container status \"2880fdfb3f748e15e74bf089fd52f14043b007613c5d6a9fb47038e2c465f42a\": rpc error: code = NotFound desc = could not find container \"2880fdfb3f748e15e74bf089fd52f14043b007613c5d6a9fb47038e2c465f42a\": container with ID starting with 2880fdfb3f748e15e74bf089fd52f14043b007613c5d6a9fb47038e2c465f42a not found: ID does not exist" Jan 31 15:06:39 crc kubenswrapper[4751]: I0131 15:06:39.679669 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-65848b4486-qb6hn"] Jan 31 15:06:39 crc kubenswrapper[4751]: I0131 15:06:39.680261 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-controller-manager-65848b4486-qb6hn" podUID="14df28b7-d7cb-466e-aa07-69e320d71620" containerName="manager" containerID="cri-o://335b59bfcc7957562303125aed6f69b84b76a35a78ef760e49817204c373ec41" gracePeriod=10 Jan 31 15:06:39 crc kubenswrapper[4751]: I0131 15:06:39.974581 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-lpshr"] Jan 31 15:06:39 crc kubenswrapper[4751]: I0131 15:06:39.974788 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openstack-operators/mariadb-operator-index-lpshr" podUID="11fab5ff-3041-45d3-8aab-29e25ed8c6ae" containerName="registry-server" containerID="cri-o://ee8f26396384b77a167d32d6b25c5bda6d0d4b830d367daeecc1c7c591a2ef63" gracePeriod=30 Jan 31 15:06:40 crc kubenswrapper[4751]: I0131 15:06:40.003204 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40b4j2q"] Jan 31 15:06:40 crc kubenswrapper[4751]: I0131 15:06:40.007738 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/f5f7435db1a968bc2e4b919cf4f5a8f6719d9ac995e6b095f5b2e84f40b4j2q"] Jan 31 15:06:40 crc kubenswrapper[4751]: I0131 15:06:40.111887 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-65848b4486-qb6hn" Jan 31 15:06:40 crc kubenswrapper[4751]: I0131 15:06:40.241343 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/14df28b7-d7cb-466e-aa07-69e320d71620-webhook-cert\") pod \"14df28b7-d7cb-466e-aa07-69e320d71620\" (UID: \"14df28b7-d7cb-466e-aa07-69e320d71620\") " Jan 31 15:06:40 crc kubenswrapper[4751]: I0131 15:06:40.241457 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5zcjm\" (UniqueName: \"kubernetes.io/projected/14df28b7-d7cb-466e-aa07-69e320d71620-kube-api-access-5zcjm\") pod \"14df28b7-d7cb-466e-aa07-69e320d71620\" (UID: \"14df28b7-d7cb-466e-aa07-69e320d71620\") " Jan 31 15:06:40 crc kubenswrapper[4751]: I0131 15:06:40.241503 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/14df28b7-d7cb-466e-aa07-69e320d71620-apiservice-cert\") pod \"14df28b7-d7cb-466e-aa07-69e320d71620\" (UID: \"14df28b7-d7cb-466e-aa07-69e320d71620\") " Jan 31 15:06:40 crc kubenswrapper[4751]: I0131 15:06:40.246776 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/14df28b7-d7cb-466e-aa07-69e320d71620-kube-api-access-5zcjm" (OuterVolumeSpecName: "kube-api-access-5zcjm") pod "14df28b7-d7cb-466e-aa07-69e320d71620" (UID: "14df28b7-d7cb-466e-aa07-69e320d71620"). 
InnerVolumeSpecName "kube-api-access-5zcjm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:06:40 crc kubenswrapper[4751]: I0131 15:06:40.256200 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14df28b7-d7cb-466e-aa07-69e320d71620-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "14df28b7-d7cb-466e-aa07-69e320d71620" (UID: "14df28b7-d7cb-466e-aa07-69e320d71620"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:06:40 crc kubenswrapper[4751]: I0131 15:06:40.268189 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/14df28b7-d7cb-466e-aa07-69e320d71620-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "14df28b7-d7cb-466e-aa07-69e320d71620" (UID: "14df28b7-d7cb-466e-aa07-69e320d71620"). InnerVolumeSpecName "apiservice-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:06:40 crc kubenswrapper[4751]: I0131 15:06:40.342773 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5zcjm\" (UniqueName: \"kubernetes.io/projected/14df28b7-d7cb-466e-aa07-69e320d71620-kube-api-access-5zcjm\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:40 crc kubenswrapper[4751]: I0131 15:06:40.342812 4751 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/14df28b7-d7cb-466e-aa07-69e320d71620-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:40 crc kubenswrapper[4751]: I0131 15:06:40.342822 4751 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/14df28b7-d7cb-466e-aa07-69e320d71620-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:40 crc kubenswrapper[4751]: I0131 15:06:40.355182 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openstack-operators/mariadb-operator-index-lpshr" Jan 31 15:06:40 crc kubenswrapper[4751]: I0131 15:06:40.386771 4751 generic.go:334] "Generic (PLEG): container finished" podID="11fab5ff-3041-45d3-8aab-29e25ed8c6ae" containerID="ee8f26396384b77a167d32d6b25c5bda6d0d4b830d367daeecc1c7c591a2ef63" exitCode=0 Jan 31 15:06:40 crc kubenswrapper[4751]: I0131 15:06:40.386840 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-index-lpshr" Jan 31 15:06:40 crc kubenswrapper[4751]: I0131 15:06:40.386863 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-lpshr" event={"ID":"11fab5ff-3041-45d3-8aab-29e25ed8c6ae","Type":"ContainerDied","Data":"ee8f26396384b77a167d32d6b25c5bda6d0d4b830d367daeecc1c7c591a2ef63"} Jan 31 15:06:40 crc kubenswrapper[4751]: I0131 15:06:40.386898 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-index-lpshr" event={"ID":"11fab5ff-3041-45d3-8aab-29e25ed8c6ae","Type":"ContainerDied","Data":"713674d4d326d8545cf66e50aee47bded08afa8e12a04a763f8540e3552c31dd"} Jan 31 15:06:40 crc kubenswrapper[4751]: I0131 15:06:40.386922 4751 scope.go:117] "RemoveContainer" containerID="ee8f26396384b77a167d32d6b25c5bda6d0d4b830d367daeecc1c7c591a2ef63" Jan 31 15:06:40 crc kubenswrapper[4751]: I0131 15:06:40.390286 4751 generic.go:334] "Generic (PLEG): container finished" podID="14df28b7-d7cb-466e-aa07-69e320d71620" containerID="335b59bfcc7957562303125aed6f69b84b76a35a78ef760e49817204c373ec41" exitCode=0 Jan 31 15:06:40 crc kubenswrapper[4751]: I0131 15:06:40.390330 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-65848b4486-qb6hn" event={"ID":"14df28b7-d7cb-466e-aa07-69e320d71620","Type":"ContainerDied","Data":"335b59bfcc7957562303125aed6f69b84b76a35a78ef760e49817204c373ec41"} Jan 31 15:06:40 crc 
kubenswrapper[4751]: I0131 15:06:40.390356 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openstack-operators/mariadb-operator-controller-manager-65848b4486-qb6hn" event={"ID":"14df28b7-d7cb-466e-aa07-69e320d71620","Type":"ContainerDied","Data":"8d7ddc4e6b1f882339c27c9bee06d6abc3c29498935b356f92bf581f66149e68"} Jan 31 15:06:40 crc kubenswrapper[4751]: I0131 15:06:40.390416 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openstack-operators/mariadb-operator-controller-manager-65848b4486-qb6hn" Jan 31 15:06:40 crc kubenswrapper[4751]: I0131 15:06:40.409634 4751 scope.go:117] "RemoveContainer" containerID="ee8f26396384b77a167d32d6b25c5bda6d0d4b830d367daeecc1c7c591a2ef63" Jan 31 15:06:40 crc kubenswrapper[4751]: E0131 15:06:40.410626 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ee8f26396384b77a167d32d6b25c5bda6d0d4b830d367daeecc1c7c591a2ef63\": container with ID starting with ee8f26396384b77a167d32d6b25c5bda6d0d4b830d367daeecc1c7c591a2ef63 not found: ID does not exist" containerID="ee8f26396384b77a167d32d6b25c5bda6d0d4b830d367daeecc1c7c591a2ef63" Jan 31 15:06:40 crc kubenswrapper[4751]: I0131 15:06:40.410687 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ee8f26396384b77a167d32d6b25c5bda6d0d4b830d367daeecc1c7c591a2ef63"} err="failed to get container status \"ee8f26396384b77a167d32d6b25c5bda6d0d4b830d367daeecc1c7c591a2ef63\": rpc error: code = NotFound desc = could not find container \"ee8f26396384b77a167d32d6b25c5bda6d0d4b830d367daeecc1c7c591a2ef63\": container with ID starting with ee8f26396384b77a167d32d6b25c5bda6d0d4b830d367daeecc1c7c591a2ef63 not found: ID does not exist" Jan 31 15:06:40 crc kubenswrapper[4751]: I0131 15:06:40.410722 4751 scope.go:117] "RemoveContainer" containerID="335b59bfcc7957562303125aed6f69b84b76a35a78ef760e49817204c373ec41" Jan 31 15:06:40 crc 
kubenswrapper[4751]: I0131 15:06:40.415399 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15539f33-874c-45ae-8ee2-7f821c54b267" path="/var/lib/kubelet/pods/15539f33-874c-45ae-8ee2-7f821c54b267/volumes" Jan 31 15:06:40 crc kubenswrapper[4751]: I0131 15:06:40.416253 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="667a6cec-bf73-4340-9be6-f4bc10182004" path="/var/lib/kubelet/pods/667a6cec-bf73-4340-9be6-f4bc10182004/volumes" Jan 31 15:06:40 crc kubenswrapper[4751]: I0131 15:06:40.430588 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-65848b4486-qb6hn"] Jan 31 15:06:40 crc kubenswrapper[4751]: I0131 15:06:40.433653 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-controller-manager-65848b4486-qb6hn"] Jan 31 15:06:40 crc kubenswrapper[4751]: I0131 15:06:40.443433 4751 scope.go:117] "RemoveContainer" containerID="335b59bfcc7957562303125aed6f69b84b76a35a78ef760e49817204c373ec41" Jan 31 15:06:40 crc kubenswrapper[4751]: I0131 15:06:40.443729 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hltq9\" (UniqueName: \"kubernetes.io/projected/11fab5ff-3041-45d3-8aab-29e25ed8c6ae-kube-api-access-hltq9\") pod \"11fab5ff-3041-45d3-8aab-29e25ed8c6ae\" (UID: \"11fab5ff-3041-45d3-8aab-29e25ed8c6ae\") " Jan 31 15:06:40 crc kubenswrapper[4751]: E0131 15:06:40.446219 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"335b59bfcc7957562303125aed6f69b84b76a35a78ef760e49817204c373ec41\": container with ID starting with 335b59bfcc7957562303125aed6f69b84b76a35a78ef760e49817204c373ec41 not found: ID does not exist" containerID="335b59bfcc7957562303125aed6f69b84b76a35a78ef760e49817204c373ec41" Jan 31 15:06:40 crc kubenswrapper[4751]: I0131 15:06:40.446252 4751 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"335b59bfcc7957562303125aed6f69b84b76a35a78ef760e49817204c373ec41"} err="failed to get container status \"335b59bfcc7957562303125aed6f69b84b76a35a78ef760e49817204c373ec41\": rpc error: code = NotFound desc = could not find container \"335b59bfcc7957562303125aed6f69b84b76a35a78ef760e49817204c373ec41\": container with ID starting with 335b59bfcc7957562303125aed6f69b84b76a35a78ef760e49817204c373ec41 not found: ID does not exist" Jan 31 15:06:40 crc kubenswrapper[4751]: I0131 15:06:40.447249 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/11fab5ff-3041-45d3-8aab-29e25ed8c6ae-kube-api-access-hltq9" (OuterVolumeSpecName: "kube-api-access-hltq9") pod "11fab5ff-3041-45d3-8aab-29e25ed8c6ae" (UID: "11fab5ff-3041-45d3-8aab-29e25ed8c6ae"). InnerVolumeSpecName "kube-api-access-hltq9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:06:40 crc kubenswrapper[4751]: I0131 15:06:40.544847 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hltq9\" (UniqueName: \"kubernetes.io/projected/11fab5ff-3041-45d3-8aab-29e25ed8c6ae-kube-api-access-hltq9\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:40 crc kubenswrapper[4751]: I0131 15:06:40.716120 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openstack-operators/mariadb-operator-index-lpshr"] Jan 31 15:06:40 crc kubenswrapper[4751]: I0131 15:06:40.723276 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openstack-operators/mariadb-operator-index-lpshr"] Jan 31 15:06:42 crc kubenswrapper[4751]: I0131 15:06:42.412354 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="11fab5ff-3041-45d3-8aab-29e25ed8c6ae" path="/var/lib/kubelet/pods/11fab5ff-3041-45d3-8aab-29e25ed8c6ae/volumes" Jan 31 15:06:42 crc kubenswrapper[4751]: I0131 15:06:42.413125 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="14df28b7-d7cb-466e-aa07-69e320d71620" path="/var/lib/kubelet/pods/14df28b7-d7cb-466e-aa07-69e320d71620/volumes" Jan 31 15:06:46 crc kubenswrapper[4751]: I0131 15:06:46.178522 4751 scope.go:117] "RemoveContainer" containerID="ed9ea3bb8f54f1c0a1685efd692fcb4334fbd2ea55432c305b974a3bf1ca584b" Jan 31 15:06:46 crc kubenswrapper[4751]: I0131 15:06:46.204249 4751 scope.go:117] "RemoveContainer" containerID="eec75dcec16927bdd78c685c8995e59bfecf459a9739faabb410481b5046b1fb" Jan 31 15:06:46 crc kubenswrapper[4751]: I0131 15:06:46.230120 4751 scope.go:117] "RemoveContainer" containerID="ecd0273950524364ff0a405d7ba30af3f5ab2065b0d4986c88176cf55c6327d6" Jan 31 15:06:46 crc kubenswrapper[4751]: I0131 15:06:46.252523 4751 scope.go:117] "RemoveContainer" containerID="914fa7bc157f85f90159778e4a352984883804f817b8f2353eb69568b5c31c21" Jan 31 15:06:46 crc kubenswrapper[4751]: I0131 15:06:46.271541 4751 scope.go:117] "RemoveContainer" containerID="5d457b880e70ab7d7bdcd88eb562c916f03e6b62d577ebf9192cc4974cd177f7" Jan 31 15:06:46 crc kubenswrapper[4751]: I0131 15:06:46.311218 4751 scope.go:117] "RemoveContainer" containerID="8f2f8355ecce67c5c0aa186fe2a2c3a5d75143a19a9cc7d982cad7e44dc2d94f" Jan 31 15:06:46 crc kubenswrapper[4751]: I0131 15:06:46.331853 4751 scope.go:117] "RemoveContainer" containerID="8784247046f02ab2d8c0a52ce8233e64d23a7cd286c98e45a4c36115e6daf6d3" Jan 31 15:06:46 crc kubenswrapper[4751]: I0131 15:06:46.346019 4751 scope.go:117] "RemoveContainer" containerID="3bbca91afaf0c02d15eadfd14c9b7b21724ed7ad9f88766a7c7a0c41fcf118a3" Jan 31 15:06:46 crc kubenswrapper[4751]: I0131 15:06:46.359308 4751 scope.go:117] "RemoveContainer" containerID="1985ee06fa1b0e5b47503229ec369a787fff12bff875d4cad0ea6a84e35d2169" Jan 31 15:06:46 crc kubenswrapper[4751]: I0131 15:06:46.373612 4751 scope.go:117] "RemoveContainer" containerID="522600cf4dfb7197c49e6a2fb7abef1d560bd673fb2da9388c38a54462595db0" Jan 31 15:06:46 crc kubenswrapper[4751]: I0131 15:06:46.414010 4751 scope.go:117] 
"RemoveContainer" containerID="d7982a0dd9c095e8b3eb11a8ff02587d379ddd34791962ab93f48b60a33bec98" Jan 31 15:06:46 crc kubenswrapper[4751]: I0131 15:06:46.432111 4751 scope.go:117] "RemoveContainer" containerID="be0ffdbf0de55d407a928a375e5355c5f5a9cda93c0fc7ee45e3254cddeefdc8" Jan 31 15:06:46 crc kubenswrapper[4751]: I0131 15:06:46.481330 4751 scope.go:117] "RemoveContainer" containerID="f84e5f08594d3f72dc6ce544065026534e30bfc6f05c4074d6d95900baad7f74" Jan 31 15:06:46 crc kubenswrapper[4751]: I0131 15:06:46.497841 4751 scope.go:117] "RemoveContainer" containerID="48b5fe15e6b5d08f52dd98326462e391e5fafbd3bd396d34d3c7b444efffa146" Jan 31 15:06:46 crc kubenswrapper[4751]: I0131 15:06:46.521733 4751 scope.go:117] "RemoveContainer" containerID="f8a8825d481236aeb9aa96c02aca48495f3689b5e59d7cbdc781d2a43a293e1d" Jan 31 15:06:46 crc kubenswrapper[4751]: I0131 15:06:46.538574 4751 scope.go:117] "RemoveContainer" containerID="8c99859db003b8960447da601e95711f7b0d1554d7ee22f9d6cb9490f3263093" Jan 31 15:06:46 crc kubenswrapper[4751]: I0131 15:06:46.562322 4751 scope.go:117] "RemoveContainer" containerID="24848de7678f7cd58f76b4f47400dce420906e54dfe8d1ef4c220211c4bbb57e" Jan 31 15:06:46 crc kubenswrapper[4751]: I0131 15:06:46.580672 4751 scope.go:117] "RemoveContainer" containerID="30a855aaf2538d16f15d520cfdce2fe3cf7008190e9478d912986cc8f0f389d2" Jan 31 15:06:46 crc kubenswrapper[4751]: I0131 15:06:46.595975 4751 scope.go:117] "RemoveContainer" containerID="fdff4dbce192cc3ad36befaed2781dd7252206ed773249a695ca5b7f5682312b" Jan 31 15:06:46 crc kubenswrapper[4751]: I0131 15:06:46.658092 4751 scope.go:117] "RemoveContainer" containerID="ac644719d568c7b156ce9cbb766a2f8c70e69f2f94ca1bad0488a7736c5cd6c9" Jan 31 15:06:46 crc kubenswrapper[4751]: I0131 15:06:46.672881 4751 scope.go:117] "RemoveContainer" containerID="acab140e6ca6aa95c6844fc3952eecbc060037dc14e3f1b6a536e962fd34fb0c" Jan 31 15:06:46 crc kubenswrapper[4751]: I0131 15:06:46.688717 4751 scope.go:117] "RemoveContainer" 
containerID="b26b741fdf290763b6328eb1a8c5b1a7f048f2aecba802a031d85386bf813c0e" Jan 31 15:06:46 crc kubenswrapper[4751]: I0131 15:06:46.704791 4751 scope.go:117] "RemoveContainer" containerID="48586bec329cecb88f31df9f626d414b524092e8f0898f91d2fb0a6740d113ca" Jan 31 15:06:46 crc kubenswrapper[4751]: I0131 15:06:46.728359 4751 scope.go:117] "RemoveContainer" containerID="457ea80a5f749ea606e6892b07ad8e22c7b832800f0f223bc54849035a17270d" Jan 31 15:06:46 crc kubenswrapper[4751]: I0131 15:06:46.747487 4751 scope.go:117] "RemoveContainer" containerID="c816a8193fdacfa313315863400dd00d03b42ba5e5ce2524c35985ffd3fa845b" Jan 31 15:06:46 crc kubenswrapper[4751]: I0131 15:06:46.766480 4751 scope.go:117] "RemoveContainer" containerID="1220529350d17a7bb750446818ec08ebb9bc079afd6ac80866f7fe1abd4f1db3" Jan 31 15:06:50 crc kubenswrapper[4751]: I0131 15:06:50.505043 4751 generic.go:334] "Generic (PLEG): container finished" podID="440e5809-7b49-4b21-99dd-668468c84017" containerID="ae11b6c0a7f7893c0ba728593c9e1b6db0bc399ae9c55df1f1023d422fc9333c" exitCode=137 Jan 31 15:06:50 crc kubenswrapper[4751]: I0131 15:06:50.505573 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"440e5809-7b49-4b21-99dd-668468c84017","Type":"ContainerDied","Data":"ae11b6c0a7f7893c0ba728593c9e1b6db0bc399ae9c55df1f1023d422fc9333c"} Jan 31 15:06:50 crc kubenswrapper[4751]: I0131 15:06:50.723229 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="glance-kuttl-tests/swift-storage-0" Jan 31 15:06:50 crc kubenswrapper[4751]: I0131 15:06:50.783654 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"swift\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") pod \"440e5809-7b49-4b21-99dd-668468c84017\" (UID: \"440e5809-7b49-4b21-99dd-668468c84017\") " Jan 31 15:06:50 crc kubenswrapper[4751]: I0131 15:06:50.783751 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/440e5809-7b49-4b21-99dd-668468c84017-lock\") pod \"440e5809-7b49-4b21-99dd-668468c84017\" (UID: \"440e5809-7b49-4b21-99dd-668468c84017\") " Jan 31 15:06:50 crc kubenswrapper[4751]: I0131 15:06:50.783928 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kqvvn\" (UniqueName: \"kubernetes.io/projected/440e5809-7b49-4b21-99dd-668468c84017-kube-api-access-kqvvn\") pod \"440e5809-7b49-4b21-99dd-668468c84017\" (UID: \"440e5809-7b49-4b21-99dd-668468c84017\") " Jan 31 15:06:50 crc kubenswrapper[4751]: I0131 15:06:50.783965 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/440e5809-7b49-4b21-99dd-668468c84017-cache\") pod \"440e5809-7b49-4b21-99dd-668468c84017\" (UID: \"440e5809-7b49-4b21-99dd-668468c84017\") " Jan 31 15:06:50 crc kubenswrapper[4751]: I0131 15:06:50.784050 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/440e5809-7b49-4b21-99dd-668468c84017-etc-swift\") pod \"440e5809-7b49-4b21-99dd-668468c84017\" (UID: \"440e5809-7b49-4b21-99dd-668468c84017\") " Jan 31 15:06:50 crc kubenswrapper[4751]: I0131 15:06:50.784265 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/440e5809-7b49-4b21-99dd-668468c84017-lock" (OuterVolumeSpecName: 
"lock") pod "440e5809-7b49-4b21-99dd-668468c84017" (UID: "440e5809-7b49-4b21-99dd-668468c84017"). InnerVolumeSpecName "lock". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:06:50 crc kubenswrapper[4751]: I0131 15:06:50.784607 4751 reconciler_common.go:293] "Volume detached for volume \"lock\" (UniqueName: \"kubernetes.io/empty-dir/440e5809-7b49-4b21-99dd-668468c84017-lock\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:50 crc kubenswrapper[4751]: I0131 15:06:50.786800 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/440e5809-7b49-4b21-99dd-668468c84017-cache" (OuterVolumeSpecName: "cache") pod "440e5809-7b49-4b21-99dd-668468c84017" (UID: "440e5809-7b49-4b21-99dd-668468c84017"). InnerVolumeSpecName "cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:06:50 crc kubenswrapper[4751]: I0131 15:06:50.789249 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/local-volume/local-storage14-crc" (OuterVolumeSpecName: "swift") pod "440e5809-7b49-4b21-99dd-668468c84017" (UID: "440e5809-7b49-4b21-99dd-668468c84017"). InnerVolumeSpecName "local-storage14-crc". PluginName "kubernetes.io/local-volume", VolumeGidValue "" Jan 31 15:06:50 crc kubenswrapper[4751]: I0131 15:06:50.789257 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/440e5809-7b49-4b21-99dd-668468c84017-kube-api-access-kqvvn" (OuterVolumeSpecName: "kube-api-access-kqvvn") pod "440e5809-7b49-4b21-99dd-668468c84017" (UID: "440e5809-7b49-4b21-99dd-668468c84017"). InnerVolumeSpecName "kube-api-access-kqvvn". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:06:50 crc kubenswrapper[4751]: I0131 15:06:50.789276 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/440e5809-7b49-4b21-99dd-668468c84017-etc-swift" (OuterVolumeSpecName: "etc-swift") pod "440e5809-7b49-4b21-99dd-668468c84017" (UID: "440e5809-7b49-4b21-99dd-668468c84017"). InnerVolumeSpecName "etc-swift". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:06:50 crc kubenswrapper[4751]: I0131 15:06:50.885535 4751 reconciler_common.go:293] "Volume detached for volume \"etc-swift\" (UniqueName: \"kubernetes.io/projected/440e5809-7b49-4b21-99dd-668468c84017-etc-swift\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:50 crc kubenswrapper[4751]: I0131 15:06:50.885592 4751 reconciler_common.go:286] "operationExecutor.UnmountDevice started for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") on node \"crc\" " Jan 31 15:06:50 crc kubenswrapper[4751]: I0131 15:06:50.885606 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kqvvn\" (UniqueName: \"kubernetes.io/projected/440e5809-7b49-4b21-99dd-668468c84017-kube-api-access-kqvvn\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:50 crc kubenswrapper[4751]: I0131 15:06:50.885619 4751 reconciler_common.go:293] "Volume detached for volume \"cache\" (UniqueName: \"kubernetes.io/empty-dir/440e5809-7b49-4b21-99dd-668468c84017-cache\") on node \"crc\" DevicePath \"\"" Jan 31 15:06:50 crc kubenswrapper[4751]: I0131 15:06:50.895179 4751 operation_generator.go:917] UnmountDevice succeeded for volume "local-storage14-crc" (UniqueName: "kubernetes.io/local-volume/local-storage14-crc") on node "crc" Jan 31 15:06:50 crc kubenswrapper[4751]: I0131 15:06:50.986787 4751 reconciler_common.go:293] "Volume detached for volume \"local-storage14-crc\" (UniqueName: \"kubernetes.io/local-volume/local-storage14-crc\") on node \"crc\" DevicePath \"\"" 
Jan 31 15:06:51 crc kubenswrapper[4751]: I0131 15:06:51.527480 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="glance-kuttl-tests/swift-storage-0" event={"ID":"440e5809-7b49-4b21-99dd-668468c84017","Type":"ContainerDied","Data":"7955d37d9d1be24fa8d9a015aa2ea953036cee2a0334d1dbf39fdbe1dcef40e5"} Jan 31 15:06:51 crc kubenswrapper[4751]: I0131 15:06:51.527584 4751 scope.go:117] "RemoveContainer" containerID="ae11b6c0a7f7893c0ba728593c9e1b6db0bc399ae9c55df1f1023d422fc9333c" Jan 31 15:06:51 crc kubenswrapper[4751]: I0131 15:06:51.527614 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="glance-kuttl-tests/swift-storage-0" Jan 31 15:06:51 crc kubenswrapper[4751]: I0131 15:06:51.558042 4751 scope.go:117] "RemoveContainer" containerID="519bd8155f30918b172e24832e84310378bd7ea10e796377a992dd3fe9e7276d" Jan 31 15:06:51 crc kubenswrapper[4751]: I0131 15:06:51.578784 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["glance-kuttl-tests/swift-storage-0"] Jan 31 15:06:51 crc kubenswrapper[4751]: I0131 15:06:51.585661 4751 scope.go:117] "RemoveContainer" containerID="950232b5b660c70b9100e81003ff993443f745f40d7da6ba8dc037822059cb8e" Jan 31 15:06:51 crc kubenswrapper[4751]: I0131 15:06:51.590312 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["glance-kuttl-tests/swift-storage-0"] Jan 31 15:06:51 crc kubenswrapper[4751]: I0131 15:06:51.604574 4751 scope.go:117] "RemoveContainer" containerID="71ca1416bdc095b268ec385a4ebcd269b729c80c3aee7f832db2892f4fe6e78a" Jan 31 15:06:51 crc kubenswrapper[4751]: I0131 15:06:51.628312 4751 scope.go:117] "RemoveContainer" containerID="1e2003fe4d2366b583ebedf393e2492c910be0ebf3f2652f5a15b1e8c78961df" Jan 31 15:06:51 crc kubenswrapper[4751]: I0131 15:06:51.649961 4751 scope.go:117] "RemoveContainer" containerID="03b25054db738f38056ec8af2822c9203e252f1a4f95be8c4ab8c1c34de3455c" Jan 31 15:06:51 crc kubenswrapper[4751]: I0131 15:06:51.671370 4751 scope.go:117] "RemoveContainer" 
containerID="1f74cf8c2ce97cd17f509447e4c986197d8af0e8b2f40e7c6a07653c81e66d3b" Jan 31 15:06:51 crc kubenswrapper[4751]: I0131 15:06:51.700931 4751 scope.go:117] "RemoveContainer" containerID="400722d3dac6cd5b0b727b3e599b127bb527981160049f2561a32e7ada14affd" Jan 31 15:06:51 crc kubenswrapper[4751]: I0131 15:06:51.726116 4751 scope.go:117] "RemoveContainer" containerID="34a87b0cfca857f6a2c07d4713531103b7df75f0fdc3e2be299ecaf554d5d9db" Jan 31 15:06:51 crc kubenswrapper[4751]: I0131 15:06:51.752365 4751 scope.go:117] "RemoveContainer" containerID="03c86cbbc819872662746f2a8384c7c50f07b481c42b5f3d39e0b1e87c7b0557" Jan 31 15:06:51 crc kubenswrapper[4751]: I0131 15:06:51.782755 4751 scope.go:117] "RemoveContainer" containerID="3b4375e902d16ea731761694aa85354dcfcda568f68f1d4210b06b07c701f380" Jan 31 15:06:51 crc kubenswrapper[4751]: I0131 15:06:51.803922 4751 scope.go:117] "RemoveContainer" containerID="a4e14596c5c3a7af2ea9e82736c916fc73b8fcbf27a523b8fe47f9a8e69b1bc2" Jan 31 15:06:51 crc kubenswrapper[4751]: I0131 15:06:51.823358 4751 scope.go:117] "RemoveContainer" containerID="461a1aaa8bc72705195647c97b28e111484e900c69e9a4da07e510a6c451ed4c" Jan 31 15:06:51 crc kubenswrapper[4751]: I0131 15:06:51.842945 4751 scope.go:117] "RemoveContainer" containerID="d0ab6cd06ea2abbd171a5345dc579495df175d9d8a52b30a0139e24e65e43616" Jan 31 15:06:51 crc kubenswrapper[4751]: I0131 15:06:51.857646 4751 scope.go:117] "RemoveContainer" containerID="d93f0c8cc4f4e310c9d207351f924f281c14e44b511b3d4a8f51fed27dbeed8f" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.419187 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="440e5809-7b49-4b21-99dd-668468c84017" path="/var/lib/kubelet/pods/440e5809-7b49-4b21-99dd-668468c84017/volumes" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.560610 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rzlpf/must-gather-k47wq"] Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.560969 4751 
cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6578d137-d120-43b2-99e3-71d4f6525d6c" containerName="manager" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.560985 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="6578d137-d120-43b2-99e3-71d4f6525d6c" containerName="manager" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.560998 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="08530f42-16c5-4253-a623-2a032aeb95a7" containerName="registry-server" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.561008 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="08530f42-16c5-4253-a623-2a032aeb95a7" containerName="registry-server" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.561022 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="container-replicator" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.561047 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="container-replicator" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.561059 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="949efaf4-a5db-405d-9d40-c44d525c603c" containerName="mariadb-account-delete" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.561092 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="949efaf4-a5db-405d-9d40-c44d525c603c" containerName="mariadb-account-delete" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.561109 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="14df28b7-d7cb-466e-aa07-69e320d71620" containerName="manager" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.561118 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="14df28b7-d7cb-466e-aa07-69e320d71620" containerName="manager" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.561132 4751 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="9dfaa3fc-8bf7-420f-8581-4e917bf3f41c" containerName="memcached" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.561141 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="9dfaa3fc-8bf7-420f-8581-4e917bf3f41c" containerName="memcached" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.561172 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19317a08-b18b-42c9-bdc9-394e1e06257d" containerName="rabbitmq" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.561182 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="19317a08-b18b-42c9-bdc9-394e1e06257d" containerName="rabbitmq" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.561196 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07a2906d-db30-4578-8b1e-088ca2f20ced" containerName="mysql-bootstrap" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.561203 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="07a2906d-db30-4578-8b1e-088ca2f20ced" containerName="mysql-bootstrap" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.561214 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d783dd01-73a7-4362-888a-ab84bc8739df" containerName="extract-content" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.561222 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d783dd01-73a7-4362-888a-ab84bc8739df" containerName="extract-content" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.561253 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="container-updater" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.561262 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="container-updater" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.561270 4751 cpu_manager.go:410] "RemoveStaleState: removing 
container" podUID="3fcd9bac-c0cb-4de4-b630-0db07f110da7" containerName="galera" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.561280 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fcd9bac-c0cb-4de4-b630-0db07f110da7" containerName="galera" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.561295 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f70443db-a342-4f5d-81b2-39c01f494cf8" containerName="manager" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.561302 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f70443db-a342-4f5d-81b2-39c01f494cf8" containerName="manager" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.561332 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="object-replicator" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.561341 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="object-replicator" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.561352 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3fcd9bac-c0cb-4de4-b630-0db07f110da7" containerName="mysql-bootstrap" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.561360 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3fcd9bac-c0cb-4de4-b630-0db07f110da7" containerName="mysql-bootstrap" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.561372 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="49ea8aae-ad89-4383-8f2f-ba35872fd605" containerName="manager" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.561380 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="49ea8aae-ad89-4383-8f2f-ba35872fd605" containerName="manager" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.561408 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440e5809-7b49-4b21-99dd-668468c84017" 
containerName="object-expirer" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.561417 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="object-expirer" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.561427 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07a2906d-db30-4578-8b1e-088ca2f20ced" containerName="galera" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.561436 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="07a2906d-db30-4578-8b1e-088ca2f20ced" containerName="galera" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.561446 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d783dd01-73a7-4362-888a-ab84bc8739df" containerName="extract-utilities" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.561455 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d783dd01-73a7-4362-888a-ab84bc8739df" containerName="extract-utilities" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.561487 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="44c515c1-f30f-44da-8959-cfd2530b46b7" containerName="registry-server" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.561497 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="44c515c1-f30f-44da-8959-cfd2530b46b7" containerName="registry-server" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.561510 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="object-updater" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.561519 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="object-updater" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.561533 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="rsync" Jan 31 
15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.561540 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="rsync" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.561569 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="object-auditor" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.561579 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="object-auditor" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.561592 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="11fab5ff-3041-45d3-8aab-29e25ed8c6ae" containerName="registry-server" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.561601 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="11fab5ff-3041-45d3-8aab-29e25ed8c6ae" containerName="registry-server" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.561612 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eacc0c6c-95c4-487f-945e-4a1e3e17c508" containerName="registry-server" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.561619 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="eacc0c6c-95c4-487f-945e-4a1e3e17c508" containerName="registry-server" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.561660 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="account-replicator" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.561669 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="account-replicator" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.561680 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="065b8624-7cdb-463c-9636-d3e980119eb7" containerName="registry-server" Jan 31 15:06:52 crc 
kubenswrapper[4751]: I0131 15:06:52.561688 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="065b8624-7cdb-463c-9636-d3e980119eb7" containerName="registry-server" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.561700 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="15539f33-874c-45ae-8ee2-7f821c54b267" containerName="registry-server" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.561709 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="15539f33-874c-45ae-8ee2-7f821c54b267" containerName="registry-server" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.561742 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22459bcc-672e-4390-89ae-2b5fa48ded71" containerName="galera" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.561750 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="22459bcc-672e-4390-89ae-2b5fa48ded71" containerName="galera" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.561762 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9f3dfaad-d451-448b-a447-47fc7bbff0e5" containerName="manager" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.561770 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="9f3dfaad-d451-448b-a447-47fc7bbff0e5" containerName="manager" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.561783 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="swift-recon-cron" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.561791 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="swift-recon-cron" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.561819 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="account-server" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.561827 4751 
state_mem.go:107] "Deleted CPUSet assignment" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="account-server" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.561840 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="container-auditor" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.561848 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="container-auditor" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.561860 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="account-auditor" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.561868 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="account-auditor" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.561897 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19317a08-b18b-42c9-bdc9-394e1e06257d" containerName="setup-container" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.561905 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="19317a08-b18b-42c9-bdc9-394e1e06257d" containerName="setup-container" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.561918 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3b77f113-f8c0-47b8-ad79-d1be38bf6e09" containerName="operator" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.561926 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="3b77f113-f8c0-47b8-ad79-d1be38bf6e09" containerName="operator" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.561939 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="container-server" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.561948 4751 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="container-server" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.561976 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="account-reaper" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.561984 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="account-reaper" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.561997 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dabb55da-08db-4d2a-8b2d-ac7b2b657053" containerName="keystone-api" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562005 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="dabb55da-08db-4d2a-8b2d-ac7b2b657053" containerName="keystone-api" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.562015 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="object-server" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562022 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="object-server" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.562034 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d783dd01-73a7-4362-888a-ab84bc8739df" containerName="registry-server" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562043 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="d783dd01-73a7-4362-888a-ab84bc8739df" containerName="registry-server" Jan 31 15:06:52 crc kubenswrapper[4751]: E0131 15:06:52.562054 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="22459bcc-672e-4390-89ae-2b5fa48ded71" containerName="mysql-bootstrap" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562092 4751 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="22459bcc-672e-4390-89ae-2b5fa48ded71" containerName="mysql-bootstrap" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562261 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="account-server" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562275 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="container-updater" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562289 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="6578d137-d120-43b2-99e3-71d4f6525d6c" containerName="manager" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562299 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="19317a08-b18b-42c9-bdc9-394e1e06257d" containerName="rabbitmq" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562307 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="account-auditor" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562316 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="container-server" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562327 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="949efaf4-a5db-405d-9d40-c44d525c603c" containerName="mariadb-account-delete" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562336 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="44c515c1-f30f-44da-8959-cfd2530b46b7" containerName="registry-server" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562348 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="d783dd01-73a7-4362-888a-ab84bc8739df" containerName="registry-server" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562360 4751 
memory_manager.go:354] "RemoveStaleState removing state" podUID="065b8624-7cdb-463c-9636-d3e980119eb7" containerName="registry-server" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562371 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="14df28b7-d7cb-466e-aa07-69e320d71620" containerName="manager" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562382 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="account-reaper" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562394 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3b77f113-f8c0-47b8-ad79-d1be38bf6e09" containerName="operator" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562405 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="object-auditor" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562415 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="15539f33-874c-45ae-8ee2-7f821c54b267" containerName="registry-server" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562424 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="dabb55da-08db-4d2a-8b2d-ac7b2b657053" containerName="keystone-api" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562453 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f70443db-a342-4f5d-81b2-39c01f494cf8" containerName="manager" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562464 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="11fab5ff-3041-45d3-8aab-29e25ed8c6ae" containerName="registry-server" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562472 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="9dfaa3fc-8bf7-420f-8581-4e917bf3f41c" containerName="memcached" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562481 4751 
memory_manager.go:354] "RemoveStaleState removing state" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="swift-recon-cron" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562492 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="49ea8aae-ad89-4383-8f2f-ba35872fd605" containerName="manager" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562501 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="account-replicator" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562510 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="object-replicator" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562521 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="container-auditor" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562533 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="rsync" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562543 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="3fcd9bac-c0cb-4de4-b630-0db07f110da7" containerName="galera" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562554 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="22459bcc-672e-4390-89ae-2b5fa48ded71" containerName="galera" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562563 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="object-expirer" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562574 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="08530f42-16c5-4253-a623-2a032aeb95a7" containerName="registry-server" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562583 
4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="9f3dfaad-d451-448b-a447-47fc7bbff0e5" containerName="manager" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562592 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="eacc0c6c-95c4-487f-945e-4a1e3e17c508" containerName="registry-server" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562603 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="container-replicator" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562614 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="object-server" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562626 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="07a2906d-db30-4578-8b1e-088ca2f20ced" containerName="galera" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.562636 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="440e5809-7b49-4b21-99dd-668468c84017" containerName="object-updater" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.563349 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rzlpf/must-gather-k47wq" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.569595 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rzlpf"/"openshift-service-ca.crt" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.571998 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-rzlpf"/"kube-root-ca.crt" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.584912 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rzlpf/must-gather-k47wq"] Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.613749 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4br7s\" (UniqueName: \"kubernetes.io/projected/e85b3ee2-7979-400f-a052-d00fe6e44fd8-kube-api-access-4br7s\") pod \"must-gather-k47wq\" (UID: \"e85b3ee2-7979-400f-a052-d00fe6e44fd8\") " pod="openshift-must-gather-rzlpf/must-gather-k47wq" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.613884 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e85b3ee2-7979-400f-a052-d00fe6e44fd8-must-gather-output\") pod \"must-gather-k47wq\" (UID: \"e85b3ee2-7979-400f-a052-d00fe6e44fd8\") " pod="openshift-must-gather-rzlpf/must-gather-k47wq" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.715132 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e85b3ee2-7979-400f-a052-d00fe6e44fd8-must-gather-output\") pod \"must-gather-k47wq\" (UID: \"e85b3ee2-7979-400f-a052-d00fe6e44fd8\") " pod="openshift-must-gather-rzlpf/must-gather-k47wq" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.715185 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-4br7s\" (UniqueName: \"kubernetes.io/projected/e85b3ee2-7979-400f-a052-d00fe6e44fd8-kube-api-access-4br7s\") pod \"must-gather-k47wq\" (UID: \"e85b3ee2-7979-400f-a052-d00fe6e44fd8\") " pod="openshift-must-gather-rzlpf/must-gather-k47wq" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.715527 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e85b3ee2-7979-400f-a052-d00fe6e44fd8-must-gather-output\") pod \"must-gather-k47wq\" (UID: \"e85b3ee2-7979-400f-a052-d00fe6e44fd8\") " pod="openshift-must-gather-rzlpf/must-gather-k47wq" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.731864 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4br7s\" (UniqueName: \"kubernetes.io/projected/e85b3ee2-7979-400f-a052-d00fe6e44fd8-kube-api-access-4br7s\") pod \"must-gather-k47wq\" (UID: \"e85b3ee2-7979-400f-a052-d00fe6e44fd8\") " pod="openshift-must-gather-rzlpf/must-gather-k47wq" Jan 31 15:06:52 crc kubenswrapper[4751]: I0131 15:06:52.877768 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rzlpf/must-gather-k47wq" Jan 31 15:06:53 crc kubenswrapper[4751]: I0131 15:06:53.295024 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rzlpf/must-gather-k47wq"] Jan 31 15:06:53 crc kubenswrapper[4751]: I0131 15:06:53.557090 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rzlpf/must-gather-k47wq" event={"ID":"e85b3ee2-7979-400f-a052-d00fe6e44fd8","Type":"ContainerStarted","Data":"596e9d64dc5ad58dd65790abe5253956e0af8e7593d0653f2fff18bf0269a2e6"} Jan 31 15:06:56 crc kubenswrapper[4751]: E0131 15:06:56.474041 4751 configmap.go:193] Couldn't get configMap glance-kuttl-tests/openstack-config: configmap "openstack-config" not found Jan 31 15:06:56 crc kubenswrapper[4751]: E0131 15:06:56.474743 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config podName:eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb nodeName:}" failed. No retries permitted until 2026-01-31 15:06:56.974716218 +0000 UTC m=+1521.349429123 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "openstack-config" (UniqueName: "kubernetes.io/configmap/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config") pod "openstackclient" (UID: "eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb") : configmap "openstack-config" not found Jan 31 15:06:56 crc kubenswrapper[4751]: E0131 15:06:56.474075 4751 secret.go:188] Couldn't get secret glance-kuttl-tests/openstack-config-secret: secret "openstack-config-secret" not found Jan 31 15:06:56 crc kubenswrapper[4751]: E0131 15:06:56.475278 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config-secret podName:eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb nodeName:}" failed. No retries permitted until 2026-01-31 15:06:56.975258642 +0000 UTC m=+1521.349971547 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "openstack-config-secret" (UniqueName: "kubernetes.io/secret/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config-secret") pod "openstackclient" (UID: "eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb") : secret "openstack-config-secret" not found Jan 31 15:06:56 crc kubenswrapper[4751]: E0131 15:06:56.980885 4751 configmap.go:193] Couldn't get configMap glance-kuttl-tests/openstack-config: configmap "openstack-config" not found Jan 31 15:06:56 crc kubenswrapper[4751]: E0131 15:06:56.980915 4751 secret.go:188] Couldn't get secret glance-kuttl-tests/openstack-config-secret: secret "openstack-config-secret" not found Jan 31 15:06:56 crc kubenswrapper[4751]: E0131 15:06:56.980981 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config podName:eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb nodeName:}" failed. No retries permitted until 2026-01-31 15:06:57.980957603 +0000 UTC m=+1522.355670488 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "openstack-config" (UniqueName: "kubernetes.io/configmap/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config") pod "openstackclient" (UID: "eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb") : configmap "openstack-config" not found Jan 31 15:06:56 crc kubenswrapper[4751]: E0131 15:06:56.981003 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config-secret podName:eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb nodeName:}" failed. No retries permitted until 2026-01-31 15:06:57.980994724 +0000 UTC m=+1522.355707729 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "openstack-config-secret" (UniqueName: "kubernetes.io/secret/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config-secret") pod "openstackclient" (UID: "eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb") : secret "openstack-config-secret" not found Jan 31 15:06:57 crc kubenswrapper[4751]: I0131 15:06:57.583784 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rzlpf/must-gather-k47wq" event={"ID":"e85b3ee2-7979-400f-a052-d00fe6e44fd8","Type":"ContainerStarted","Data":"73c40052238eafbfd679d14fc1f9ec13e388944d29049da084027f407ea9e611"} Jan 31 15:06:57 crc kubenswrapper[4751]: I0131 15:06:57.584139 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rzlpf/must-gather-k47wq" event={"ID":"e85b3ee2-7979-400f-a052-d00fe6e44fd8","Type":"ContainerStarted","Data":"6309ccb308ea404e3c296e7d0e15d3d0347ba6fcd799c7b7cbf6ada270683b8a"} Jan 31 15:06:57 crc kubenswrapper[4751]: I0131 15:06:57.598328 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rzlpf/must-gather-k47wq" podStartSLOduration=1.853880569 podStartE2EDuration="5.598303642s" podCreationTimestamp="2026-01-31 15:06:52 +0000 UTC" firstStartedPulling="2026-01-31 15:06:53.315988132 +0000 UTC m=+1517.690701017" lastFinishedPulling="2026-01-31 15:06:57.060411205 +0000 UTC m=+1521.435124090" observedRunningTime="2026-01-31 15:06:57.596886874 +0000 UTC m=+1521.971599759" watchObservedRunningTime="2026-01-31 15:06:57.598303642 +0000 UTC m=+1521.973016547" Jan 31 15:06:57 crc kubenswrapper[4751]: E0131 15:06:57.994558 4751 configmap.go:193] Couldn't get configMap glance-kuttl-tests/openstack-config: configmap "openstack-config" not found Jan 31 15:06:57 crc kubenswrapper[4751]: E0131 15:06:57.994639 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config podName:eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb 
nodeName:}" failed. No retries permitted until 2026-01-31 15:06:59.994620816 +0000 UTC m=+1524.369333701 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "openstack-config" (UniqueName: "kubernetes.io/configmap/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config") pod "openstackclient" (UID: "eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb") : configmap "openstack-config" not found Jan 31 15:06:57 crc kubenswrapper[4751]: E0131 15:06:57.994638 4751 secret.go:188] Couldn't get secret glance-kuttl-tests/openstack-config-secret: secret "openstack-config-secret" not found Jan 31 15:06:57 crc kubenswrapper[4751]: E0131 15:06:57.994712 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config-secret podName:eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb nodeName:}" failed. No retries permitted until 2026-01-31 15:06:59.994694448 +0000 UTC m=+1524.369407343 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "openstack-config-secret" (UniqueName: "kubernetes.io/secret/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config-secret") pod "openstackclient" (UID: "eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb") : secret "openstack-config-secret" not found Jan 31 15:07:00 crc kubenswrapper[4751]: E0131 15:07:00.028168 4751 configmap.go:193] Couldn't get configMap glance-kuttl-tests/openstack-config: configmap "openstack-config" not found Jan 31 15:07:00 crc kubenswrapper[4751]: E0131 15:07:00.028322 4751 secret.go:188] Couldn't get secret glance-kuttl-tests/openstack-config-secret: secret "openstack-config-secret" not found Jan 31 15:07:00 crc kubenswrapper[4751]: E0131 15:07:00.028473 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config podName:eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb nodeName:}" failed. 
No retries permitted until 2026-01-31 15:07:04.028457693 +0000 UTC m=+1528.403170578 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "openstack-config" (UniqueName: "kubernetes.io/configmap/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config") pod "openstackclient" (UID: "eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb") : configmap "openstack-config" not found Jan 31 15:07:00 crc kubenswrapper[4751]: E0131 15:07:00.028595 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config-secret podName:eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb nodeName:}" failed. No retries permitted until 2026-01-31 15:07:04.028568806 +0000 UTC m=+1528.403281751 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "openstack-config-secret" (UniqueName: "kubernetes.io/secret/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config-secret") pod "openstackclient" (UID: "eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb") : secret "openstack-config-secret" not found Jan 31 15:07:04 crc kubenswrapper[4751]: E0131 15:07:04.079397 4751 secret.go:188] Couldn't get secret glance-kuttl-tests/openstack-config-secret: secret "openstack-config-secret" not found Jan 31 15:07:04 crc kubenswrapper[4751]: E0131 15:07:04.079748 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config-secret podName:eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb nodeName:}" failed. No retries permitted until 2026-01-31 15:07:12.079730762 +0000 UTC m=+1536.454443647 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "openstack-config-secret" (UniqueName: "kubernetes.io/secret/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config-secret") pod "openstackclient" (UID: "eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb") : secret "openstack-config-secret" not found Jan 31 15:07:04 crc kubenswrapper[4751]: E0131 15:07:04.079405 4751 configmap.go:193] Couldn't get configMap glance-kuttl-tests/openstack-config: configmap "openstack-config" not found Jan 31 15:07:04 crc kubenswrapper[4751]: E0131 15:07:04.079850 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config podName:eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb nodeName:}" failed. No retries permitted until 2026-01-31 15:07:12.079833275 +0000 UTC m=+1536.454546160 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "openstack-config" (UniqueName: "kubernetes.io/configmap/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config") pod "openstackclient" (UID: "eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb") : configmap "openstack-config" not found Jan 31 15:07:10 crc kubenswrapper[4751]: I0131 15:07:10.965972 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-4c48n"] Jan 31 15:07:10 crc kubenswrapper[4751]: I0131 15:07:10.967893 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4c48n" Jan 31 15:07:10 crc kubenswrapper[4751]: I0131 15:07:10.983769 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4c48n"] Jan 31 15:07:11 crc kubenswrapper[4751]: I0131 15:07:11.076683 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a49e54-556d-487d-8cdf-3fd3dc9442a5-utilities\") pod \"certified-operators-4c48n\" (UID: \"50a49e54-556d-487d-8cdf-3fd3dc9442a5\") " pod="openshift-marketplace/certified-operators-4c48n" Jan 31 15:07:11 crc kubenswrapper[4751]: I0131 15:07:11.076737 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a49e54-556d-487d-8cdf-3fd3dc9442a5-catalog-content\") pod \"certified-operators-4c48n\" (UID: \"50a49e54-556d-487d-8cdf-3fd3dc9442a5\") " pod="openshift-marketplace/certified-operators-4c48n" Jan 31 15:07:11 crc kubenswrapper[4751]: I0131 15:07:11.076778 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgxk8\" (UniqueName: \"kubernetes.io/projected/50a49e54-556d-487d-8cdf-3fd3dc9442a5-kube-api-access-kgxk8\") pod \"certified-operators-4c48n\" (UID: \"50a49e54-556d-487d-8cdf-3fd3dc9442a5\") " pod="openshift-marketplace/certified-operators-4c48n" Jan 31 15:07:11 crc kubenswrapper[4751]: I0131 15:07:11.178658 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a49e54-556d-487d-8cdf-3fd3dc9442a5-utilities\") pod \"certified-operators-4c48n\" (UID: \"50a49e54-556d-487d-8cdf-3fd3dc9442a5\") " pod="openshift-marketplace/certified-operators-4c48n" Jan 31 15:07:11 crc kubenswrapper[4751]: I0131 15:07:11.178713 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a49e54-556d-487d-8cdf-3fd3dc9442a5-catalog-content\") pod \"certified-operators-4c48n\" (UID: \"50a49e54-556d-487d-8cdf-3fd3dc9442a5\") " pod="openshift-marketplace/certified-operators-4c48n" Jan 31 15:07:11 crc kubenswrapper[4751]: I0131 15:07:11.178752 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-kgxk8\" (UniqueName: \"kubernetes.io/projected/50a49e54-556d-487d-8cdf-3fd3dc9442a5-kube-api-access-kgxk8\") pod \"certified-operators-4c48n\" (UID: \"50a49e54-556d-487d-8cdf-3fd3dc9442a5\") " pod="openshift-marketplace/certified-operators-4c48n" Jan 31 15:07:11 crc kubenswrapper[4751]: I0131 15:07:11.179222 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a49e54-556d-487d-8cdf-3fd3dc9442a5-utilities\") pod \"certified-operators-4c48n\" (UID: \"50a49e54-556d-487d-8cdf-3fd3dc9442a5\") " pod="openshift-marketplace/certified-operators-4c48n" Jan 31 15:07:11 crc kubenswrapper[4751]: I0131 15:07:11.179491 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a49e54-556d-487d-8cdf-3fd3dc9442a5-catalog-content\") pod \"certified-operators-4c48n\" (UID: \"50a49e54-556d-487d-8cdf-3fd3dc9442a5\") " pod="openshift-marketplace/certified-operators-4c48n" Jan 31 15:07:11 crc kubenswrapper[4751]: I0131 15:07:11.220330 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgxk8\" (UniqueName: \"kubernetes.io/projected/50a49e54-556d-487d-8cdf-3fd3dc9442a5-kube-api-access-kgxk8\") pod \"certified-operators-4c48n\" (UID: \"50a49e54-556d-487d-8cdf-3fd3dc9442a5\") " pod="openshift-marketplace/certified-operators-4c48n" Jan 31 15:07:11 crc kubenswrapper[4751]: I0131 15:07:11.288043 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4c48n" Jan 31 15:07:11 crc kubenswrapper[4751]: I0131 15:07:11.744814 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-4c48n"] Jan 31 15:07:11 crc kubenswrapper[4751]: W0131 15:07:11.749326 4751 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod50a49e54_556d_487d_8cdf_3fd3dc9442a5.slice/crio-257e8fb19ff2e9763f697cfaf1e9176eb8fd5fdd2c4c4d5de33e938223dd6f02 WatchSource:0}: Error finding container 257e8fb19ff2e9763f697cfaf1e9176eb8fd5fdd2c4c4d5de33e938223dd6f02: Status 404 returned error can't find the container with id 257e8fb19ff2e9763f697cfaf1e9176eb8fd5fdd2c4c4d5de33e938223dd6f02 Jan 31 15:07:12 crc kubenswrapper[4751]: E0131 15:07:12.089854 4751 secret.go:188] Couldn't get secret glance-kuttl-tests/openstack-config-secret: secret "openstack-config-secret" not found Jan 31 15:07:12 crc kubenswrapper[4751]: E0131 15:07:12.090734 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config-secret podName:eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb nodeName:}" failed. No retries permitted until 2026-01-31 15:07:28.090717387 +0000 UTC m=+1552.465430262 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "openstack-config-secret" (UniqueName: "kubernetes.io/secret/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config-secret") pod "openstackclient" (UID: "eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb") : secret "openstack-config-secret" not found Jan 31 15:07:12 crc kubenswrapper[4751]: E0131 15:07:12.089850 4751 configmap.go:193] Couldn't get configMap glance-kuttl-tests/openstack-config: configmap "openstack-config" not found Jan 31 15:07:12 crc kubenswrapper[4751]: E0131 15:07:12.090887 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config podName:eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb nodeName:}" failed. No retries permitted until 2026-01-31 15:07:28.090877311 +0000 UTC m=+1552.465590196 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "openstack-config" (UniqueName: "kubernetes.io/configmap/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config") pod "openstackclient" (UID: "eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb") : configmap "openstack-config" not found Jan 31 15:07:12 crc kubenswrapper[4751]: I0131 15:07:12.681173 4751 generic.go:334] "Generic (PLEG): container finished" podID="50a49e54-556d-487d-8cdf-3fd3dc9442a5" containerID="c14d46d646ea8b4cc9763419a629506b15e87c33ed47f956ba8d19439f02c98f" exitCode=0 Jan 31 15:07:12 crc kubenswrapper[4751]: I0131 15:07:12.681218 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4c48n" event={"ID":"50a49e54-556d-487d-8cdf-3fd3dc9442a5","Type":"ContainerDied","Data":"c14d46d646ea8b4cc9763419a629506b15e87c33ed47f956ba8d19439f02c98f"} Jan 31 15:07:12 crc kubenswrapper[4751]: I0131 15:07:12.681242 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4c48n" 
event={"ID":"50a49e54-556d-487d-8cdf-3fd3dc9442a5","Type":"ContainerStarted","Data":"257e8fb19ff2e9763f697cfaf1e9176eb8fd5fdd2c4c4d5de33e938223dd6f02"} Jan 31 15:07:14 crc kubenswrapper[4751]: I0131 15:07:14.701958 4751 generic.go:334] "Generic (PLEG): container finished" podID="50a49e54-556d-487d-8cdf-3fd3dc9442a5" containerID="cb3b940cfe06f226fc954ce923f02b1e0af3b36fac9a300e9eaafb89bb90805e" exitCode=0 Jan 31 15:07:14 crc kubenswrapper[4751]: I0131 15:07:14.702025 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4c48n" event={"ID":"50a49e54-556d-487d-8cdf-3fd3dc9442a5","Type":"ContainerDied","Data":"cb3b940cfe06f226fc954ce923f02b1e0af3b36fac9a300e9eaafb89bb90805e"} Jan 31 15:07:15 crc kubenswrapper[4751]: I0131 15:07:15.716509 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4c48n" event={"ID":"50a49e54-556d-487d-8cdf-3fd3dc9442a5","Type":"ContainerStarted","Data":"90a4c2e5bfaff4cb9a64ddde999d46f6a1c7891174e7e8800ba33b52b355c79f"} Jan 31 15:07:15 crc kubenswrapper[4751]: I0131 15:07:15.753917 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-4c48n" podStartSLOduration=3.364306317 podStartE2EDuration="5.753897151s" podCreationTimestamp="2026-01-31 15:07:10 +0000 UTC" firstStartedPulling="2026-01-31 15:07:12.682971588 +0000 UTC m=+1537.057684473" lastFinishedPulling="2026-01-31 15:07:15.072562422 +0000 UTC m=+1539.447275307" observedRunningTime="2026-01-31 15:07:15.74895928 +0000 UTC m=+1540.123672185" watchObservedRunningTime="2026-01-31 15:07:15.753897151 +0000 UTC m=+1540.128610046" Jan 31 15:07:20 crc kubenswrapper[4751]: I0131 15:07:20.910709 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-76zwv"] Jan 31 15:07:20 crc kubenswrapper[4751]: I0131 15:07:20.913209 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-76zwv" Jan 31 15:07:20 crc kubenswrapper[4751]: I0131 15:07:20.930310 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-76zwv"] Jan 31 15:07:21 crc kubenswrapper[4751]: I0131 15:07:21.015824 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6cwxr\" (UniqueName: \"kubernetes.io/projected/ca5a5a5e-fdc7-409c-b452-44b84779eba2-kube-api-access-6cwxr\") pod \"redhat-marketplace-76zwv\" (UID: \"ca5a5a5e-fdc7-409c-b452-44b84779eba2\") " pod="openshift-marketplace/redhat-marketplace-76zwv" Jan 31 15:07:21 crc kubenswrapper[4751]: I0131 15:07:21.015897 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca5a5a5e-fdc7-409c-b452-44b84779eba2-utilities\") pod \"redhat-marketplace-76zwv\" (UID: \"ca5a5a5e-fdc7-409c-b452-44b84779eba2\") " pod="openshift-marketplace/redhat-marketplace-76zwv" Jan 31 15:07:21 crc kubenswrapper[4751]: I0131 15:07:21.016195 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca5a5a5e-fdc7-409c-b452-44b84779eba2-catalog-content\") pod \"redhat-marketplace-76zwv\" (UID: \"ca5a5a5e-fdc7-409c-b452-44b84779eba2\") " pod="openshift-marketplace/redhat-marketplace-76zwv" Jan 31 15:07:21 crc kubenswrapper[4751]: I0131 15:07:21.117755 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca5a5a5e-fdc7-409c-b452-44b84779eba2-catalog-content\") pod \"redhat-marketplace-76zwv\" (UID: \"ca5a5a5e-fdc7-409c-b452-44b84779eba2\") " pod="openshift-marketplace/redhat-marketplace-76zwv" Jan 31 15:07:21 crc kubenswrapper[4751]: I0131 15:07:21.117841 4751 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-6cwxr\" (UniqueName: \"kubernetes.io/projected/ca5a5a5e-fdc7-409c-b452-44b84779eba2-kube-api-access-6cwxr\") pod \"redhat-marketplace-76zwv\" (UID: \"ca5a5a5e-fdc7-409c-b452-44b84779eba2\") " pod="openshift-marketplace/redhat-marketplace-76zwv" Jan 31 15:07:21 crc kubenswrapper[4751]: I0131 15:07:21.117905 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca5a5a5e-fdc7-409c-b452-44b84779eba2-utilities\") pod \"redhat-marketplace-76zwv\" (UID: \"ca5a5a5e-fdc7-409c-b452-44b84779eba2\") " pod="openshift-marketplace/redhat-marketplace-76zwv" Jan 31 15:07:21 crc kubenswrapper[4751]: I0131 15:07:21.118653 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca5a5a5e-fdc7-409c-b452-44b84779eba2-utilities\") pod \"redhat-marketplace-76zwv\" (UID: \"ca5a5a5e-fdc7-409c-b452-44b84779eba2\") " pod="openshift-marketplace/redhat-marketplace-76zwv" Jan 31 15:07:21 crc kubenswrapper[4751]: I0131 15:07:21.119011 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca5a5a5e-fdc7-409c-b452-44b84779eba2-catalog-content\") pod \"redhat-marketplace-76zwv\" (UID: \"ca5a5a5e-fdc7-409c-b452-44b84779eba2\") " pod="openshift-marketplace/redhat-marketplace-76zwv" Jan 31 15:07:21 crc kubenswrapper[4751]: I0131 15:07:21.146309 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6cwxr\" (UniqueName: \"kubernetes.io/projected/ca5a5a5e-fdc7-409c-b452-44b84779eba2-kube-api-access-6cwxr\") pod \"redhat-marketplace-76zwv\" (UID: \"ca5a5a5e-fdc7-409c-b452-44b84779eba2\") " pod="openshift-marketplace/redhat-marketplace-76zwv" Jan 31 15:07:21 crc kubenswrapper[4751]: I0131 15:07:21.230288 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-76zwv" Jan 31 15:07:21 crc kubenswrapper[4751]: I0131 15:07:21.288746 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-4c48n" Jan 31 15:07:21 crc kubenswrapper[4751]: I0131 15:07:21.289039 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-4c48n" Jan 31 15:07:21 crc kubenswrapper[4751]: I0131 15:07:21.337125 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-4c48n" Jan 31 15:07:21 crc kubenswrapper[4751]: I0131 15:07:21.672121 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-76zwv"] Jan 31 15:07:21 crc kubenswrapper[4751]: I0131 15:07:21.755436 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76zwv" event={"ID":"ca5a5a5e-fdc7-409c-b452-44b84779eba2","Type":"ContainerStarted","Data":"304775135fd2604e43538146d5fba66160f453d9567898a2957c9c65dc840cad"} Jan 31 15:07:21 crc kubenswrapper[4751]: I0131 15:07:21.798287 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-4c48n" Jan 31 15:07:22 crc kubenswrapper[4751]: I0131 15:07:22.764227 4751 generic.go:334] "Generic (PLEG): container finished" podID="ca5a5a5e-fdc7-409c-b452-44b84779eba2" containerID="23d1a9651161a01cd1bee04fc08b679a060106e1e39fd6accef9df2a385409fb" exitCode=0 Jan 31 15:07:22 crc kubenswrapper[4751]: I0131 15:07:22.765344 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76zwv" event={"ID":"ca5a5a5e-fdc7-409c-b452-44b84779eba2","Type":"ContainerDied","Data":"23d1a9651161a01cd1bee04fc08b679a060106e1e39fd6accef9df2a385409fb"} Jan 31 15:07:23 crc kubenswrapper[4751]: I0131 15:07:23.692722 4751 kubelet.go:2437] 
"SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4c48n"] Jan 31 15:07:23 crc kubenswrapper[4751]: I0131 15:07:23.773128 4751 generic.go:334] "Generic (PLEG): container finished" podID="ca5a5a5e-fdc7-409c-b452-44b84779eba2" containerID="108d9e539af6c029e8ea242e46655d6d091d2b9ce6876bc3e568fb564b94984a" exitCode=0 Jan 31 15:07:23 crc kubenswrapper[4751]: I0131 15:07:23.773171 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76zwv" event={"ID":"ca5a5a5e-fdc7-409c-b452-44b84779eba2","Type":"ContainerDied","Data":"108d9e539af6c029e8ea242e46655d6d091d2b9ce6876bc3e568fb564b94984a"} Jan 31 15:07:23 crc kubenswrapper[4751]: I0131 15:07:23.773333 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-4c48n" podUID="50a49e54-556d-487d-8cdf-3fd3dc9442a5" containerName="registry-server" containerID="cri-o://90a4c2e5bfaff4cb9a64ddde999d46f6a1c7891174e7e8800ba33b52b355c79f" gracePeriod=2 Jan 31 15:07:24 crc kubenswrapper[4751]: I0131 15:07:24.176850 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4c48n" Jan 31 15:07:24 crc kubenswrapper[4751]: I0131 15:07:24.269602 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgxk8\" (UniqueName: \"kubernetes.io/projected/50a49e54-556d-487d-8cdf-3fd3dc9442a5-kube-api-access-kgxk8\") pod \"50a49e54-556d-487d-8cdf-3fd3dc9442a5\" (UID: \"50a49e54-556d-487d-8cdf-3fd3dc9442a5\") " Jan 31 15:07:24 crc kubenswrapper[4751]: I0131 15:07:24.269694 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a49e54-556d-487d-8cdf-3fd3dc9442a5-catalog-content\") pod \"50a49e54-556d-487d-8cdf-3fd3dc9442a5\" (UID: \"50a49e54-556d-487d-8cdf-3fd3dc9442a5\") " Jan 31 15:07:24 crc kubenswrapper[4751]: I0131 15:07:24.269730 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a49e54-556d-487d-8cdf-3fd3dc9442a5-utilities\") pod \"50a49e54-556d-487d-8cdf-3fd3dc9442a5\" (UID: \"50a49e54-556d-487d-8cdf-3fd3dc9442a5\") " Jan 31 15:07:24 crc kubenswrapper[4751]: I0131 15:07:24.270905 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50a49e54-556d-487d-8cdf-3fd3dc9442a5-utilities" (OuterVolumeSpecName: "utilities") pod "50a49e54-556d-487d-8cdf-3fd3dc9442a5" (UID: "50a49e54-556d-487d-8cdf-3fd3dc9442a5"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:07:24 crc kubenswrapper[4751]: I0131 15:07:24.274740 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/50a49e54-556d-487d-8cdf-3fd3dc9442a5-kube-api-access-kgxk8" (OuterVolumeSpecName: "kube-api-access-kgxk8") pod "50a49e54-556d-487d-8cdf-3fd3dc9442a5" (UID: "50a49e54-556d-487d-8cdf-3fd3dc9442a5"). InnerVolumeSpecName "kube-api-access-kgxk8". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:07:24 crc kubenswrapper[4751]: I0131 15:07:24.329919 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/50a49e54-556d-487d-8cdf-3fd3dc9442a5-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "50a49e54-556d-487d-8cdf-3fd3dc9442a5" (UID: "50a49e54-556d-487d-8cdf-3fd3dc9442a5"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:07:24 crc kubenswrapper[4751]: I0131 15:07:24.371407 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kgxk8\" (UniqueName: \"kubernetes.io/projected/50a49e54-556d-487d-8cdf-3fd3dc9442a5-kube-api-access-kgxk8\") on node \"crc\" DevicePath \"\"" Jan 31 15:07:24 crc kubenswrapper[4751]: I0131 15:07:24.371440 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/50a49e54-556d-487d-8cdf-3fd3dc9442a5-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:07:24 crc kubenswrapper[4751]: I0131 15:07:24.371449 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/50a49e54-556d-487d-8cdf-3fd3dc9442a5-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:07:24 crc kubenswrapper[4751]: I0131 15:07:24.782208 4751 generic.go:334] "Generic (PLEG): container finished" podID="50a49e54-556d-487d-8cdf-3fd3dc9442a5" containerID="90a4c2e5bfaff4cb9a64ddde999d46f6a1c7891174e7e8800ba33b52b355c79f" exitCode=0 Jan 31 15:07:24 crc kubenswrapper[4751]: I0131 15:07:24.782294 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-4c48n" Jan 31 15:07:24 crc kubenswrapper[4751]: I0131 15:07:24.782299 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4c48n" event={"ID":"50a49e54-556d-487d-8cdf-3fd3dc9442a5","Type":"ContainerDied","Data":"90a4c2e5bfaff4cb9a64ddde999d46f6a1c7891174e7e8800ba33b52b355c79f"} Jan 31 15:07:24 crc kubenswrapper[4751]: I0131 15:07:24.782701 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-4c48n" event={"ID":"50a49e54-556d-487d-8cdf-3fd3dc9442a5","Type":"ContainerDied","Data":"257e8fb19ff2e9763f697cfaf1e9176eb8fd5fdd2c4c4d5de33e938223dd6f02"} Jan 31 15:07:24 crc kubenswrapper[4751]: I0131 15:07:24.782731 4751 scope.go:117] "RemoveContainer" containerID="90a4c2e5bfaff4cb9a64ddde999d46f6a1c7891174e7e8800ba33b52b355c79f" Jan 31 15:07:24 crc kubenswrapper[4751]: I0131 15:07:24.786016 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76zwv" event={"ID":"ca5a5a5e-fdc7-409c-b452-44b84779eba2","Type":"ContainerStarted","Data":"313ed8d08566eef3d80118fd1b73dbf2b891937d2a28d957adf4fe8bc1065d48"} Jan 31 15:07:24 crc kubenswrapper[4751]: I0131 15:07:24.797334 4751 scope.go:117] "RemoveContainer" containerID="cb3b940cfe06f226fc954ce923f02b1e0af3b36fac9a300e9eaafb89bb90805e" Jan 31 15:07:24 crc kubenswrapper[4751]: I0131 15:07:24.802618 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-4c48n"] Jan 31 15:07:24 crc kubenswrapper[4751]: I0131 15:07:24.812213 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-4c48n"] Jan 31 15:07:24 crc kubenswrapper[4751]: I0131 15:07:24.815909 4751 scope.go:117] "RemoveContainer" containerID="c14d46d646ea8b4cc9763419a629506b15e87c33ed47f956ba8d19439f02c98f" Jan 31 15:07:24 crc kubenswrapper[4751]: I0131 15:07:24.831830 4751 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-76zwv" podStartSLOduration=3.392602762 podStartE2EDuration="4.831815094s" podCreationTimestamp="2026-01-31 15:07:20 +0000 UTC" firstStartedPulling="2026-01-31 15:07:22.766342016 +0000 UTC m=+1547.141054891" lastFinishedPulling="2026-01-31 15:07:24.205554328 +0000 UTC m=+1548.580267223" observedRunningTime="2026-01-31 15:07:24.827788226 +0000 UTC m=+1549.202501111" watchObservedRunningTime="2026-01-31 15:07:24.831815094 +0000 UTC m=+1549.206527979" Jan 31 15:07:24 crc kubenswrapper[4751]: I0131 15:07:24.833325 4751 scope.go:117] "RemoveContainer" containerID="90a4c2e5bfaff4cb9a64ddde999d46f6a1c7891174e7e8800ba33b52b355c79f" Jan 31 15:07:24 crc kubenswrapper[4751]: E0131 15:07:24.834410 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"90a4c2e5bfaff4cb9a64ddde999d46f6a1c7891174e7e8800ba33b52b355c79f\": container with ID starting with 90a4c2e5bfaff4cb9a64ddde999d46f6a1c7891174e7e8800ba33b52b355c79f not found: ID does not exist" containerID="90a4c2e5bfaff4cb9a64ddde999d46f6a1c7891174e7e8800ba33b52b355c79f" Jan 31 15:07:24 crc kubenswrapper[4751]: I0131 15:07:24.834447 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"90a4c2e5bfaff4cb9a64ddde999d46f6a1c7891174e7e8800ba33b52b355c79f"} err="failed to get container status \"90a4c2e5bfaff4cb9a64ddde999d46f6a1c7891174e7e8800ba33b52b355c79f\": rpc error: code = NotFound desc = could not find container \"90a4c2e5bfaff4cb9a64ddde999d46f6a1c7891174e7e8800ba33b52b355c79f\": container with ID starting with 90a4c2e5bfaff4cb9a64ddde999d46f6a1c7891174e7e8800ba33b52b355c79f not found: ID does not exist" Jan 31 15:07:24 crc kubenswrapper[4751]: I0131 15:07:24.834477 4751 scope.go:117] "RemoveContainer" containerID="cb3b940cfe06f226fc954ce923f02b1e0af3b36fac9a300e9eaafb89bb90805e" Jan 31 15:07:24 
crc kubenswrapper[4751]: E0131 15:07:24.834798 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"cb3b940cfe06f226fc954ce923f02b1e0af3b36fac9a300e9eaafb89bb90805e\": container with ID starting with cb3b940cfe06f226fc954ce923f02b1e0af3b36fac9a300e9eaafb89bb90805e not found: ID does not exist" containerID="cb3b940cfe06f226fc954ce923f02b1e0af3b36fac9a300e9eaafb89bb90805e" Jan 31 15:07:24 crc kubenswrapper[4751]: I0131 15:07:24.834904 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"cb3b940cfe06f226fc954ce923f02b1e0af3b36fac9a300e9eaafb89bb90805e"} err="failed to get container status \"cb3b940cfe06f226fc954ce923f02b1e0af3b36fac9a300e9eaafb89bb90805e\": rpc error: code = NotFound desc = could not find container \"cb3b940cfe06f226fc954ce923f02b1e0af3b36fac9a300e9eaafb89bb90805e\": container with ID starting with cb3b940cfe06f226fc954ce923f02b1e0af3b36fac9a300e9eaafb89bb90805e not found: ID does not exist" Jan 31 15:07:24 crc kubenswrapper[4751]: I0131 15:07:24.834996 4751 scope.go:117] "RemoveContainer" containerID="c14d46d646ea8b4cc9763419a629506b15e87c33ed47f956ba8d19439f02c98f" Jan 31 15:07:24 crc kubenswrapper[4751]: E0131 15:07:24.835385 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c14d46d646ea8b4cc9763419a629506b15e87c33ed47f956ba8d19439f02c98f\": container with ID starting with c14d46d646ea8b4cc9763419a629506b15e87c33ed47f956ba8d19439f02c98f not found: ID does not exist" containerID="c14d46d646ea8b4cc9763419a629506b15e87c33ed47f956ba8d19439f02c98f" Jan 31 15:07:24 crc kubenswrapper[4751]: I0131 15:07:24.835416 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c14d46d646ea8b4cc9763419a629506b15e87c33ed47f956ba8d19439f02c98f"} err="failed to get container status 
\"c14d46d646ea8b4cc9763419a629506b15e87c33ed47f956ba8d19439f02c98f\": rpc error: code = NotFound desc = could not find container \"c14d46d646ea8b4cc9763419a629506b15e87c33ed47f956ba8d19439f02c98f\": container with ID starting with c14d46d646ea8b4cc9763419a629506b15e87c33ed47f956ba8d19439f02c98f not found: ID does not exist" Jan 31 15:07:26 crc kubenswrapper[4751]: I0131 15:07:26.415350 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="50a49e54-556d-487d-8cdf-3fd3dc9442a5" path="/var/lib/kubelet/pods/50a49e54-556d-487d-8cdf-3fd3dc9442a5/volumes" Jan 31 15:07:28 crc kubenswrapper[4751]: E0131 15:07:28.119927 4751 configmap.go:193] Couldn't get configMap glance-kuttl-tests/openstack-config: configmap "openstack-config" not found Jan 31 15:07:28 crc kubenswrapper[4751]: E0131 15:07:28.120352 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config podName:eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb nodeName:}" failed. No retries permitted until 2026-01-31 15:08:00.120330439 +0000 UTC m=+1584.495043334 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "openstack-config" (UniqueName: "kubernetes.io/configmap/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config") pod "openstackclient" (UID: "eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb") : configmap "openstack-config" not found Jan 31 15:07:28 crc kubenswrapper[4751]: E0131 15:07:28.119977 4751 secret.go:188] Couldn't get secret glance-kuttl-tests/openstack-config-secret: secret "openstack-config-secret" not found Jan 31 15:07:28 crc kubenswrapper[4751]: E0131 15:07:28.120470 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config-secret podName:eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb nodeName:}" failed. No retries permitted until 2026-01-31 15:08:00.120441062 +0000 UTC m=+1584.495153987 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "openstack-config-secret" (UniqueName: "kubernetes.io/secret/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config-secret") pod "openstackclient" (UID: "eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb") : secret "openstack-config-secret" not found Jan 31 15:07:31 crc kubenswrapper[4751]: I0131 15:07:31.230599 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-76zwv" Jan 31 15:07:31 crc kubenswrapper[4751]: I0131 15:07:31.230973 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-76zwv" Jan 31 15:07:31 crc kubenswrapper[4751]: I0131 15:07:31.280796 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-76zwv" Jan 31 15:07:31 crc kubenswrapper[4751]: I0131 15:07:31.878629 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-76zwv" Jan 31 15:07:31 crc kubenswrapper[4751]: I0131 15:07:31.923139 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-76zwv"] Jan 31 15:07:33 crc kubenswrapper[4751]: I0131 15:07:33.859491 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-76zwv" podUID="ca5a5a5e-fdc7-409c-b452-44b84779eba2" containerName="registry-server" containerID="cri-o://313ed8d08566eef3d80118fd1b73dbf2b891937d2a28d957adf4fe8bc1065d48" gracePeriod=2 Jan 31 15:07:35 crc kubenswrapper[4751]: I0131 15:07:34.751776 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-76zwv" Jan 31 15:07:35 crc kubenswrapper[4751]: I0131 15:07:34.815873 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca5a5a5e-fdc7-409c-b452-44b84779eba2-catalog-content\") pod \"ca5a5a5e-fdc7-409c-b452-44b84779eba2\" (UID: \"ca5a5a5e-fdc7-409c-b452-44b84779eba2\") " Jan 31 15:07:35 crc kubenswrapper[4751]: I0131 15:07:34.815980 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca5a5a5e-fdc7-409c-b452-44b84779eba2-utilities\") pod \"ca5a5a5e-fdc7-409c-b452-44b84779eba2\" (UID: \"ca5a5a5e-fdc7-409c-b452-44b84779eba2\") " Jan 31 15:07:35 crc kubenswrapper[4751]: I0131 15:07:34.816016 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6cwxr\" (UniqueName: \"kubernetes.io/projected/ca5a5a5e-fdc7-409c-b452-44b84779eba2-kube-api-access-6cwxr\") pod \"ca5a5a5e-fdc7-409c-b452-44b84779eba2\" (UID: \"ca5a5a5e-fdc7-409c-b452-44b84779eba2\") " Jan 31 15:07:35 crc kubenswrapper[4751]: I0131 15:07:34.817289 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca5a5a5e-fdc7-409c-b452-44b84779eba2-utilities" (OuterVolumeSpecName: "utilities") pod "ca5a5a5e-fdc7-409c-b452-44b84779eba2" (UID: "ca5a5a5e-fdc7-409c-b452-44b84779eba2"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:07:35 crc kubenswrapper[4751]: I0131 15:07:34.839310 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ca5a5a5e-fdc7-409c-b452-44b84779eba2-kube-api-access-6cwxr" (OuterVolumeSpecName: "kube-api-access-6cwxr") pod "ca5a5a5e-fdc7-409c-b452-44b84779eba2" (UID: "ca5a5a5e-fdc7-409c-b452-44b84779eba2"). InnerVolumeSpecName "kube-api-access-6cwxr". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:07:35 crc kubenswrapper[4751]: I0131 15:07:34.841369 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ca5a5a5e-fdc7-409c-b452-44b84779eba2-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ca5a5a5e-fdc7-409c-b452-44b84779eba2" (UID: "ca5a5a5e-fdc7-409c-b452-44b84779eba2"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:07:35 crc kubenswrapper[4751]: I0131 15:07:34.866424 4751 generic.go:334] "Generic (PLEG): container finished" podID="ca5a5a5e-fdc7-409c-b452-44b84779eba2" containerID="313ed8d08566eef3d80118fd1b73dbf2b891937d2a28d957adf4fe8bc1065d48" exitCode=0 Jan 31 15:07:35 crc kubenswrapper[4751]: I0131 15:07:34.866459 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76zwv" event={"ID":"ca5a5a5e-fdc7-409c-b452-44b84779eba2","Type":"ContainerDied","Data":"313ed8d08566eef3d80118fd1b73dbf2b891937d2a28d957adf4fe8bc1065d48"} Jan 31 15:07:35 crc kubenswrapper[4751]: I0131 15:07:34.866482 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-76zwv" event={"ID":"ca5a5a5e-fdc7-409c-b452-44b84779eba2","Type":"ContainerDied","Data":"304775135fd2604e43538146d5fba66160f453d9567898a2957c9c65dc840cad"} Jan 31 15:07:35 crc kubenswrapper[4751]: I0131 15:07:34.866499 4751 scope.go:117] "RemoveContainer" containerID="313ed8d08566eef3d80118fd1b73dbf2b891937d2a28d957adf4fe8bc1065d48" Jan 31 15:07:35 crc kubenswrapper[4751]: I0131 15:07:34.866539 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-76zwv" Jan 31 15:07:35 crc kubenswrapper[4751]: I0131 15:07:34.898656 4751 scope.go:117] "RemoveContainer" containerID="108d9e539af6c029e8ea242e46655d6d091d2b9ce6876bc3e568fb564b94984a" Jan 31 15:07:35 crc kubenswrapper[4751]: I0131 15:07:34.905063 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-76zwv"] Jan 31 15:07:35 crc kubenswrapper[4751]: I0131 15:07:34.910543 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-76zwv"] Jan 31 15:07:35 crc kubenswrapper[4751]: I0131 15:07:34.925992 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ca5a5a5e-fdc7-409c-b452-44b84779eba2-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:07:35 crc kubenswrapper[4751]: I0131 15:07:34.926025 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ca5a5a5e-fdc7-409c-b452-44b84779eba2-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:07:35 crc kubenswrapper[4751]: I0131 15:07:34.926041 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6cwxr\" (UniqueName: \"kubernetes.io/projected/ca5a5a5e-fdc7-409c-b452-44b84779eba2-kube-api-access-6cwxr\") on node \"crc\" DevicePath \"\"" Jan 31 15:07:35 crc kubenswrapper[4751]: I0131 15:07:34.926279 4751 scope.go:117] "RemoveContainer" containerID="23d1a9651161a01cd1bee04fc08b679a060106e1e39fd6accef9df2a385409fb" Jan 31 15:07:35 crc kubenswrapper[4751]: I0131 15:07:34.946767 4751 scope.go:117] "RemoveContainer" containerID="313ed8d08566eef3d80118fd1b73dbf2b891937d2a28d957adf4fe8bc1065d48" Jan 31 15:07:35 crc kubenswrapper[4751]: E0131 15:07:34.947299 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"313ed8d08566eef3d80118fd1b73dbf2b891937d2a28d957adf4fe8bc1065d48\": container with ID starting with 313ed8d08566eef3d80118fd1b73dbf2b891937d2a28d957adf4fe8bc1065d48 not found: ID does not exist" containerID="313ed8d08566eef3d80118fd1b73dbf2b891937d2a28d957adf4fe8bc1065d48" Jan 31 15:07:35 crc kubenswrapper[4751]: I0131 15:07:34.947339 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"313ed8d08566eef3d80118fd1b73dbf2b891937d2a28d957adf4fe8bc1065d48"} err="failed to get container status \"313ed8d08566eef3d80118fd1b73dbf2b891937d2a28d957adf4fe8bc1065d48\": rpc error: code = NotFound desc = could not find container \"313ed8d08566eef3d80118fd1b73dbf2b891937d2a28d957adf4fe8bc1065d48\": container with ID starting with 313ed8d08566eef3d80118fd1b73dbf2b891937d2a28d957adf4fe8bc1065d48 not found: ID does not exist" Jan 31 15:07:35 crc kubenswrapper[4751]: I0131 15:07:34.947371 4751 scope.go:117] "RemoveContainer" containerID="108d9e539af6c029e8ea242e46655d6d091d2b9ce6876bc3e568fb564b94984a" Jan 31 15:07:35 crc kubenswrapper[4751]: E0131 15:07:34.947817 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"108d9e539af6c029e8ea242e46655d6d091d2b9ce6876bc3e568fb564b94984a\": container with ID starting with 108d9e539af6c029e8ea242e46655d6d091d2b9ce6876bc3e568fb564b94984a not found: ID does not exist" containerID="108d9e539af6c029e8ea242e46655d6d091d2b9ce6876bc3e568fb564b94984a" Jan 31 15:07:35 crc kubenswrapper[4751]: I0131 15:07:34.947865 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"108d9e539af6c029e8ea242e46655d6d091d2b9ce6876bc3e568fb564b94984a"} err="failed to get container status \"108d9e539af6c029e8ea242e46655d6d091d2b9ce6876bc3e568fb564b94984a\": rpc error: code = NotFound desc = could not find container \"108d9e539af6c029e8ea242e46655d6d091d2b9ce6876bc3e568fb564b94984a\": container with ID 
starting with 108d9e539af6c029e8ea242e46655d6d091d2b9ce6876bc3e568fb564b94984a not found: ID does not exist" Jan 31 15:07:35 crc kubenswrapper[4751]: I0131 15:07:34.947891 4751 scope.go:117] "RemoveContainer" containerID="23d1a9651161a01cd1bee04fc08b679a060106e1e39fd6accef9df2a385409fb" Jan 31 15:07:35 crc kubenswrapper[4751]: E0131 15:07:34.948182 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23d1a9651161a01cd1bee04fc08b679a060106e1e39fd6accef9df2a385409fb\": container with ID starting with 23d1a9651161a01cd1bee04fc08b679a060106e1e39fd6accef9df2a385409fb not found: ID does not exist" containerID="23d1a9651161a01cd1bee04fc08b679a060106e1e39fd6accef9df2a385409fb" Jan 31 15:07:35 crc kubenswrapper[4751]: I0131 15:07:34.948226 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23d1a9651161a01cd1bee04fc08b679a060106e1e39fd6accef9df2a385409fb"} err="failed to get container status \"23d1a9651161a01cd1bee04fc08b679a060106e1e39fd6accef9df2a385409fb\": rpc error: code = NotFound desc = could not find container \"23d1a9651161a01cd1bee04fc08b679a060106e1e39fd6accef9df2a385409fb\": container with ID starting with 23d1a9651161a01cd1bee04fc08b679a060106e1e39fd6accef9df2a385409fb not found: ID does not exist" Jan 31 15:07:35 crc kubenswrapper[4751]: I0131 15:07:35.244793 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f_eec59a88-8f4d-4482-aa2a-11a508cc3a79/util/0.log" Jan 31 15:07:35 crc kubenswrapper[4751]: I0131 15:07:35.444898 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f_eec59a88-8f4d-4482-aa2a-11a508cc3a79/util/0.log" Jan 31 15:07:35 crc kubenswrapper[4751]: I0131 15:07:35.490511 4751 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f_eec59a88-8f4d-4482-aa2a-11a508cc3a79/pull/0.log" Jan 31 15:07:35 crc kubenswrapper[4751]: I0131 15:07:35.502571 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f_eec59a88-8f4d-4482-aa2a-11a508cc3a79/pull/0.log" Jan 31 15:07:35 crc kubenswrapper[4751]: I0131 15:07:35.636789 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f_eec59a88-8f4d-4482-aa2a-11a508cc3a79/util/0.log" Jan 31 15:07:35 crc kubenswrapper[4751]: I0131 15:07:35.649084 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f_eec59a88-8f4d-4482-aa2a-11a508cc3a79/extract/0.log" Jan 31 15:07:35 crc kubenswrapper[4751]: I0131 15:07:35.672517 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f_eec59a88-8f4d-4482-aa2a-11a508cc3a79/pull/0.log" Jan 31 15:07:35 crc kubenswrapper[4751]: I0131 15:07:35.781510 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d4cb5b58-r8xn7_91cc4333-403a-4ce4-a347-8b475ad0169a/manager/0.log" Jan 31 15:07:35 crc kubenswrapper[4751]: I0131 15:07:35.850528 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-index-vjs56_95bedc09-cab6-4e6b-a210-8cb1f8b39601/registry-server/0.log" Jan 31 15:07:36 crc kubenswrapper[4751]: I0131 15:07:36.416604 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ca5a5a5e-fdc7-409c-b452-44b84779eba2" path="/var/lib/kubelet/pods/ca5a5a5e-fdc7-409c-b452-44b84779eba2/volumes" Jan 31 15:07:47 crc kubenswrapper[4751]: I0131 15:07:47.173278 
4751 scope.go:117] "RemoveContainer" containerID="f2d3ac70f8ddad94f9d969f2045d3e2ecc9acc9f7ef1ceb69fe7a6e69910af4e" Jan 31 15:07:47 crc kubenswrapper[4751]: I0131 15:07:47.216624 4751 scope.go:117] "RemoveContainer" containerID="7e789eeabd8afc4f9d1d5096f902a1d03746cbe8acdf7df1c1fc6d2741b5975c" Jan 31 15:07:47 crc kubenswrapper[4751]: I0131 15:07:47.235312 4751 scope.go:117] "RemoveContainer" containerID="9903c977627bd13e9ad2f5f25c1001bf58623795a6fa400f5ca5b3724b524577" Jan 31 15:07:47 crc kubenswrapper[4751]: I0131 15:07:47.255151 4751 scope.go:117] "RemoveContainer" containerID="1b08739497c3b40bf4675eac8a3f77cfbe93709c363b0f7d316a1a53ab0f3eab" Jan 31 15:07:47 crc kubenswrapper[4751]: I0131 15:07:47.278396 4751 scope.go:117] "RemoveContainer" containerID="488f4cd159917294625dbe3f504270e4c6cae704ed670c29ddb28b43bab332ff" Jan 31 15:07:47 crc kubenswrapper[4751]: I0131 15:07:47.292109 4751 scope.go:117] "RemoveContainer" containerID="5da73e1408c3942c575e820ab3bbf5f7e673d6aadac72064d98cb22aab529aa9" Jan 31 15:07:47 crc kubenswrapper[4751]: I0131 15:07:47.316208 4751 scope.go:117] "RemoveContainer" containerID="2def03042cdbf5505276d6eb76695378d7a0c3b7b97a2d260b2bb7c00d1d66d9" Jan 31 15:07:47 crc kubenswrapper[4751]: I0131 15:07:47.337128 4751 scope.go:117] "RemoveContainer" containerID="4fd861ffb49593c05c4ca2dc031ea7913a88e0c31f7cbaf913eca6a5819336ff" Jan 31 15:07:47 crc kubenswrapper[4751]: I0131 15:07:47.988700 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-h4drr_5c630253-f658-44fb-891d-f560f1e2b577/control-plane-machine-set-operator/0.log" Jan 31 15:07:48 crc kubenswrapper[4751]: I0131 15:07:48.117750 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4gqrl_bcd7a932-6db9-4cca-b619-852242324725/kube-rbac-proxy/0.log" Jan 31 15:07:48 crc kubenswrapper[4751]: I0131 15:07:48.164174 4751 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4gqrl_bcd7a932-6db9-4cca-b619-852242324725/machine-api-operator/0.log" Jan 31 15:08:00 crc kubenswrapper[4751]: E0131 15:08:00.176422 4751 configmap.go:193] Couldn't get configMap glance-kuttl-tests/openstack-config: configmap "openstack-config" not found Jan 31 15:08:00 crc kubenswrapper[4751]: E0131 15:08:00.176448 4751 secret.go:188] Couldn't get secret glance-kuttl-tests/openstack-config-secret: secret "openstack-config-secret" not found Jan 31 15:08:00 crc kubenswrapper[4751]: E0131 15:08:00.177287 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config podName:eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb nodeName:}" failed. No retries permitted until 2026-01-31 15:09:04.177257883 +0000 UTC m=+1648.551970798 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "openstack-config" (UniqueName: "kubernetes.io/configmap/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config") pod "openstackclient" (UID: "eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb") : configmap "openstack-config" not found Jan 31 15:08:00 crc kubenswrapper[4751]: E0131 15:08:00.177370 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config-secret podName:eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb nodeName:}" failed. No retries permitted until 2026-01-31 15:09:04.177338385 +0000 UTC m=+1648.552051310 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "openstack-config-secret" (UniqueName: "kubernetes.io/secret/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config-secret") pod "openstackclient" (UID: "eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb") : secret "openstack-config-secret" not found Jan 31 15:08:15 crc kubenswrapper[4751]: I0131 15:08:15.334138 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-6dhf9_6b667c31-e911-496a-9c8b-12c906e724ec/kube-rbac-proxy/0.log" Jan 31 15:08:15 crc kubenswrapper[4751]: I0131 15:08:15.368110 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-6dhf9_6b667c31-e911-496a-9c8b-12c906e724ec/controller/0.log" Jan 31 15:08:15 crc kubenswrapper[4751]: I0131 15:08:15.544388 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9z9n2_b1f214e9-14db-462f-900c-3652ec7908e5/cp-frr-files/0.log" Jan 31 15:08:15 crc kubenswrapper[4751]: I0131 15:08:15.740328 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9z9n2_b1f214e9-14db-462f-900c-3652ec7908e5/cp-metrics/0.log" Jan 31 15:08:15 crc kubenswrapper[4751]: I0131 15:08:15.757113 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9z9n2_b1f214e9-14db-462f-900c-3652ec7908e5/cp-reloader/0.log" Jan 31 15:08:15 crc kubenswrapper[4751]: I0131 15:08:15.782344 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9z9n2_b1f214e9-14db-462f-900c-3652ec7908e5/cp-frr-files/0.log" Jan 31 15:08:15 crc kubenswrapper[4751]: I0131 15:08:15.783797 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9z9n2_b1f214e9-14db-462f-900c-3652ec7908e5/cp-reloader/0.log" Jan 31 15:08:15 crc kubenswrapper[4751]: I0131 15:08:15.997230 4751 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-9z9n2_b1f214e9-14db-462f-900c-3652ec7908e5/cp-reloader/0.log" Jan 31 15:08:15 crc kubenswrapper[4751]: I0131 15:08:15.999121 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9z9n2_b1f214e9-14db-462f-900c-3652ec7908e5/cp-frr-files/0.log" Jan 31 15:08:16 crc kubenswrapper[4751]: I0131 15:08:16.007627 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9z9n2_b1f214e9-14db-462f-900c-3652ec7908e5/cp-metrics/0.log" Jan 31 15:08:16 crc kubenswrapper[4751]: I0131 15:08:16.036295 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9z9n2_b1f214e9-14db-462f-900c-3652ec7908e5/cp-metrics/0.log" Jan 31 15:08:16 crc kubenswrapper[4751]: I0131 15:08:16.166340 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9z9n2_b1f214e9-14db-462f-900c-3652ec7908e5/cp-frr-files/0.log" Jan 31 15:08:16 crc kubenswrapper[4751]: I0131 15:08:16.166560 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9z9n2_b1f214e9-14db-462f-900c-3652ec7908e5/cp-metrics/0.log" Jan 31 15:08:16 crc kubenswrapper[4751]: I0131 15:08:16.168925 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9z9n2_b1f214e9-14db-462f-900c-3652ec7908e5/cp-reloader/0.log" Jan 31 15:08:16 crc kubenswrapper[4751]: I0131 15:08:16.220781 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9z9n2_b1f214e9-14db-462f-900c-3652ec7908e5/controller/0.log" Jan 31 15:08:16 crc kubenswrapper[4751]: I0131 15:08:16.336957 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9z9n2_b1f214e9-14db-462f-900c-3652ec7908e5/kube-rbac-proxy/0.log" Jan 31 15:08:16 crc kubenswrapper[4751]: I0131 15:08:16.367368 4751 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-9z9n2_b1f214e9-14db-462f-900c-3652ec7908e5/frr-metrics/0.log" Jan 31 15:08:16 crc kubenswrapper[4751]: I0131 15:08:16.410948 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9z9n2_b1f214e9-14db-462f-900c-3652ec7908e5/kube-rbac-proxy-frr/0.log" Jan 31 15:08:16 crc kubenswrapper[4751]: I0131 15:08:16.567239 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9z9n2_b1f214e9-14db-462f-900c-3652ec7908e5/reloader/0.log" Jan 31 15:08:16 crc kubenswrapper[4751]: I0131 15:08:16.646744 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-qf86j_94655b12-be6a-4043-8f7c-80d1b7fb1a2f/frr-k8s-webhook-server/0.log" Jan 31 15:08:16 crc kubenswrapper[4751]: I0131 15:08:16.751336 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6b999687d7-vf7mn_bd60e998-83e4-442a-98ac-c4e33d4b4765/manager/0.log" Jan 31 15:08:16 crc kubenswrapper[4751]: I0131 15:08:16.992483 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5c46dd7d46-8xt78_01320eb9-ccb5-4593-866a-f49553fa7262/webhook-server/0.log" Jan 31 15:08:17 crc kubenswrapper[4751]: I0131 15:08:17.082687 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9z9n2_b1f214e9-14db-462f-900c-3652ec7908e5/frr/0.log" Jan 31 15:08:17 crc kubenswrapper[4751]: I0131 15:08:17.097583 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qv6gh_7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc/kube-rbac-proxy/0.log" Jan 31 15:08:17 crc kubenswrapper[4751]: I0131 15:08:17.308704 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qv6gh_7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc/speaker/0.log" Jan 31 15:08:29 crc kubenswrapper[4751]: I0131 15:08:29.846752 4751 log.go:25] "Finished 
parsing log file" path="/var/log/pods/glance-kuttl-tests_openstackclient_eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb/openstackclient/0.log" Jan 31 15:08:38 crc kubenswrapper[4751]: I0131 15:08:38.896832 4751 patch_prober.go:28] interesting pod/machine-config-daemon-2wpj7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:08:38 crc kubenswrapper[4751]: I0131 15:08:38.897472 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:08:42 crc kubenswrapper[4751]: I0131 15:08:42.197787 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz_f3380dc7-49d9-4d61-a0bb-003c1c5e2742/util/0.log" Jan 31 15:08:42 crc kubenswrapper[4751]: I0131 15:08:42.420524 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz_f3380dc7-49d9-4d61-a0bb-003c1c5e2742/util/0.log" Jan 31 15:08:42 crc kubenswrapper[4751]: I0131 15:08:42.444111 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz_f3380dc7-49d9-4d61-a0bb-003c1c5e2742/pull/0.log" Jan 31 15:08:42 crc kubenswrapper[4751]: I0131 15:08:42.453277 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz_f3380dc7-49d9-4d61-a0bb-003c1c5e2742/pull/0.log" Jan 31 15:08:42 crc kubenswrapper[4751]: I0131 15:08:42.621132 4751 
log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz_f3380dc7-49d9-4d61-a0bb-003c1c5e2742/util/0.log" Jan 31 15:08:42 crc kubenswrapper[4751]: I0131 15:08:42.632043 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz_f3380dc7-49d9-4d61-a0bb-003c1c5e2742/pull/0.log" Jan 31 15:08:42 crc kubenswrapper[4751]: I0131 15:08:42.640605 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz_f3380dc7-49d9-4d61-a0bb-003c1c5e2742/extract/0.log" Jan 31 15:08:42 crc kubenswrapper[4751]: I0131 15:08:42.795017 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcs7h_c83f0a10-f56b-4795-93b9-ee224d439648/extract-utilities/0.log" Jan 31 15:08:42 crc kubenswrapper[4751]: I0131 15:08:42.960914 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcs7h_c83f0a10-f56b-4795-93b9-ee224d439648/extract-content/0.log" Jan 31 15:08:42 crc kubenswrapper[4751]: I0131 15:08:42.988870 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcs7h_c83f0a10-f56b-4795-93b9-ee224d439648/extract-utilities/0.log" Jan 31 15:08:42 crc kubenswrapper[4751]: I0131 15:08:42.993194 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcs7h_c83f0a10-f56b-4795-93b9-ee224d439648/extract-content/0.log" Jan 31 15:08:43 crc kubenswrapper[4751]: I0131 15:08:43.116599 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcs7h_c83f0a10-f56b-4795-93b9-ee224d439648/extract-content/0.log" Jan 31 15:08:43 crc kubenswrapper[4751]: I0131 15:08:43.136181 4751 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcs7h_c83f0a10-f56b-4795-93b9-ee224d439648/extract-utilities/0.log" Jan 31 15:08:43 crc kubenswrapper[4751]: I0131 15:08:43.318921 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gr5gf_2eb5e3aa-17fa-49a0-a422-bc69a8a410fb/extract-utilities/0.log" Jan 31 15:08:43 crc kubenswrapper[4751]: I0131 15:08:43.508684 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gr5gf_2eb5e3aa-17fa-49a0-a422-bc69a8a410fb/extract-utilities/0.log" Jan 31 15:08:43 crc kubenswrapper[4751]: I0131 15:08:43.508726 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gr5gf_2eb5e3aa-17fa-49a0-a422-bc69a8a410fb/extract-content/0.log" Jan 31 15:08:43 crc kubenswrapper[4751]: I0131 15:08:43.540843 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gr5gf_2eb5e3aa-17fa-49a0-a422-bc69a8a410fb/extract-content/0.log" Jan 31 15:08:43 crc kubenswrapper[4751]: I0131 15:08:43.579115 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcs7h_c83f0a10-f56b-4795-93b9-ee224d439648/registry-server/0.log" Jan 31 15:08:43 crc kubenswrapper[4751]: I0131 15:08:43.685040 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gr5gf_2eb5e3aa-17fa-49a0-a422-bc69a8a410fb/extract-content/0.log" Jan 31 15:08:43 crc kubenswrapper[4751]: I0131 15:08:43.690950 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gr5gf_2eb5e3aa-17fa-49a0-a422-bc69a8a410fb/extract-utilities/0.log" Jan 31 15:08:43 crc kubenswrapper[4751]: I0131 15:08:43.849255 4751 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-jv94g_9853dd16-26f9-4fe4-9468-52d39dd4dd1f/marketplace-operator/0.log" Jan 31 15:08:43 crc kubenswrapper[4751]: I0131 15:08:43.908824 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-22krg_affc293d-ac4e-49ad-be4a-bc13d7c056a7/extract-utilities/0.log" Jan 31 15:08:44 crc kubenswrapper[4751]: I0131 15:08:44.121869 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gr5gf_2eb5e3aa-17fa-49a0-a422-bc69a8a410fb/registry-server/0.log" Jan 31 15:08:44 crc kubenswrapper[4751]: I0131 15:08:44.123445 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-22krg_affc293d-ac4e-49ad-be4a-bc13d7c056a7/extract-utilities/0.log" Jan 31 15:08:44 crc kubenswrapper[4751]: I0131 15:08:44.142191 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-22krg_affc293d-ac4e-49ad-be4a-bc13d7c056a7/extract-content/0.log" Jan 31 15:08:44 crc kubenswrapper[4751]: I0131 15:08:44.190534 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-22krg_affc293d-ac4e-49ad-be4a-bc13d7c056a7/extract-content/0.log" Jan 31 15:08:44 crc kubenswrapper[4751]: I0131 15:08:44.326128 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-22krg_affc293d-ac4e-49ad-be4a-bc13d7c056a7/extract-utilities/0.log" Jan 31 15:08:44 crc kubenswrapper[4751]: I0131 15:08:44.337525 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-22krg_affc293d-ac4e-49ad-be4a-bc13d7c056a7/extract-content/0.log" Jan 31 15:08:44 crc kubenswrapper[4751]: I0131 15:08:44.437569 4751 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-22krg_affc293d-ac4e-49ad-be4a-bc13d7c056a7/registry-server/0.log" Jan 31 15:08:44 crc kubenswrapper[4751]: I0131 15:08:44.531384 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-678m7_43fbbbf2-c128-46a4-9cc3-99e46c617027/extract-utilities/0.log" Jan 31 15:08:44 crc kubenswrapper[4751]: I0131 15:08:44.665916 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-678m7_43fbbbf2-c128-46a4-9cc3-99e46c617027/extract-content/0.log" Jan 31 15:08:44 crc kubenswrapper[4751]: I0131 15:08:44.670654 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-678m7_43fbbbf2-c128-46a4-9cc3-99e46c617027/extract-utilities/0.log" Jan 31 15:08:44 crc kubenswrapper[4751]: I0131 15:08:44.678799 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-678m7_43fbbbf2-c128-46a4-9cc3-99e46c617027/extract-content/0.log" Jan 31 15:08:44 crc kubenswrapper[4751]: I0131 15:08:44.866062 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-678m7_43fbbbf2-c128-46a4-9cc3-99e46c617027/extract-content/0.log" Jan 31 15:08:44 crc kubenswrapper[4751]: I0131 15:08:44.882487 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-678m7_43fbbbf2-c128-46a4-9cc3-99e46c617027/extract-utilities/0.log" Jan 31 15:08:45 crc kubenswrapper[4751]: I0131 15:08:45.297004 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-678m7_43fbbbf2-c128-46a4-9cc3-99e46c617027/registry-server/0.log" Jan 31 15:08:47 crc kubenswrapper[4751]: I0131 15:08:47.467124 4751 scope.go:117] "RemoveContainer" containerID="f05a5057693bfdfb7d9c10870add4a18c1b97e05d99f428268b3b93785058feb" Jan 31 15:08:47 crc kubenswrapper[4751]: I0131 15:08:47.487338 
4751 scope.go:117] "RemoveContainer" containerID="f75375e8e6ad82f0f02e30825660a61882c0595e19792c2979a8125e9bf94686" Jan 31 15:08:47 crc kubenswrapper[4751]: I0131 15:08:47.527773 4751 scope.go:117] "RemoveContainer" containerID="79a10f8ac34beb7889999938f10a1f8fd98e243cac870f30f1dc184e88a0e786" Jan 31 15:08:47 crc kubenswrapper[4751]: I0131 15:08:47.539994 4751 scope.go:117] "RemoveContainer" containerID="2d17a7d49f7479975731597d7e17ac81d17ead2b622a0ef7093e781d499f7009" Jan 31 15:08:47 crc kubenswrapper[4751]: I0131 15:08:47.566887 4751 scope.go:117] "RemoveContainer" containerID="0e1d80ca3a8421336cb1b11f5bd0a2d183f47c5e60dedbf720f6c08836e3d291" Jan 31 15:08:47 crc kubenswrapper[4751]: I0131 15:08:47.612594 4751 scope.go:117] "RemoveContainer" containerID="748bb24fed6fe40319dbeeaf8bdfc4e48c0cf8e80d0e06626f9b2a7dd29a8843" Jan 31 15:08:47 crc kubenswrapper[4751]: I0131 15:08:47.628841 4751 scope.go:117] "RemoveContainer" containerID="a97088bc226d5155802489f7ac6a208ee9b1cacfbbb954588201d395f2a07500" Jan 31 15:08:47 crc kubenswrapper[4751]: I0131 15:08:47.649776 4751 scope.go:117] "RemoveContainer" containerID="2145d899923840c33715fa17628307294c5047421a38139dabc06cf3d05cb997" Jan 31 15:09:04 crc kubenswrapper[4751]: E0131 15:09:04.185715 4751 secret.go:188] Couldn't get secret glance-kuttl-tests/openstack-config-secret: secret "openstack-config-secret" not found Jan 31 15:09:04 crc kubenswrapper[4751]: E0131 15:09:04.186222 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config-secret podName:eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb nodeName:}" failed. No retries permitted until 2026-01-31 15:11:06.186203452 +0000 UTC m=+1770.560916347 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "openstack-config-secret" (UniqueName: "kubernetes.io/secret/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config-secret") pod "openstackclient" (UID: "eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb") : secret "openstack-config-secret" not found Jan 31 15:09:04 crc kubenswrapper[4751]: E0131 15:09:04.185722 4751 configmap.go:193] Couldn't get configMap glance-kuttl-tests/openstack-config: configmap "openstack-config" not found Jan 31 15:09:04 crc kubenswrapper[4751]: E0131 15:09:04.186374 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config podName:eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb nodeName:}" failed. No retries permitted until 2026-01-31 15:11:06.186340486 +0000 UTC m=+1770.561053401 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "openstack-config" (UniqueName: "kubernetes.io/configmap/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config") pod "openstackclient" (UID: "eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb") : configmap "openstack-config" not found Jan 31 15:09:08 crc kubenswrapper[4751]: I0131 15:09:08.896737 4751 patch_prober.go:28] interesting pod/machine-config-daemon-2wpj7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:09:08 crc kubenswrapper[4751]: I0131 15:09:08.897405 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:09:38 crc kubenswrapper[4751]: I0131 15:09:38.897113 4751 patch_prober.go:28] interesting pod/machine-config-daemon-2wpj7 
container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:09:38 crc kubenswrapper[4751]: I0131 15:09:38.898276 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:09:38 crc kubenswrapper[4751]: I0131 15:09:38.898373 4751 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" Jan 31 15:09:38 crc kubenswrapper[4751]: I0131 15:09:38.898828 4751 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1e91cdb1164832b2457470e9c5a6b63801cdbe4c69db7c3369929241eb60a130"} pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 31 15:09:38 crc kubenswrapper[4751]: I0131 15:09:38.898952 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerName="machine-config-daemon" containerID="cri-o://1e91cdb1164832b2457470e9c5a6b63801cdbe4c69db7c3369929241eb60a130" gracePeriod=600 Jan 31 15:09:39 crc kubenswrapper[4751]: E0131 15:09:39.020455 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2wpj7_openshift-machine-config-operator(b4c170e8-22c9-43a9-8b34-9d626c2ccddc)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" Jan 31 15:09:39 crc kubenswrapper[4751]: I0131 15:09:39.637364 4751 generic.go:334] "Generic (PLEG): container finished" podID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerID="1e91cdb1164832b2457470e9c5a6b63801cdbe4c69db7c3369929241eb60a130" exitCode=0 Jan 31 15:09:39 crc kubenswrapper[4751]: I0131 15:09:39.637746 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" event={"ID":"b4c170e8-22c9-43a9-8b34-9d626c2ccddc","Type":"ContainerDied","Data":"1e91cdb1164832b2457470e9c5a6b63801cdbe4c69db7c3369929241eb60a130"} Jan 31 15:09:39 crc kubenswrapper[4751]: I0131 15:09:39.637927 4751 scope.go:117] "RemoveContainer" containerID="89a88ddaeae8a6fe7859be79e45bc66e157a0d02a03f5daf69e0ab6320bd15be" Jan 31 15:09:39 crc kubenswrapper[4751]: I0131 15:09:39.638710 4751 scope.go:117] "RemoveContainer" containerID="1e91cdb1164832b2457470e9c5a6b63801cdbe4c69db7c3369929241eb60a130" Jan 31 15:09:39 crc kubenswrapper[4751]: E0131 15:09:39.639109 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wpj7_openshift-machine-config-operator(b4c170e8-22c9-43a9-8b34-9d626c2ccddc)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" Jan 31 15:09:47 crc kubenswrapper[4751]: I0131 15:09:47.733969 4751 scope.go:117] "RemoveContainer" containerID="15a7c13661f7a9dc9cca48ea38cbda46b049856ab09d05ef63e1d7c0a14b8bb5" Jan 31 15:09:47 crc kubenswrapper[4751]: I0131 15:09:47.773855 4751 scope.go:117] "RemoveContainer" 
containerID="25f84d0f51f45c02503d2025ee5bcd86d54fb4126f654afe8e8c27150f9da926" Jan 31 15:09:47 crc kubenswrapper[4751]: I0131 15:09:47.789474 4751 scope.go:117] "RemoveContainer" containerID="2e90cc31ff36ceaadebe3379b42c48741b099a92854117d92e72b66bca77ad69" Jan 31 15:09:47 crc kubenswrapper[4751]: I0131 15:09:47.809760 4751 scope.go:117] "RemoveContainer" containerID="04a2620fed6cde572c43eab031fe61d9c4a7478ffe007510ee4e0e1e7a876ff4" Jan 31 15:09:53 crc kubenswrapper[4751]: I0131 15:09:53.405916 4751 scope.go:117] "RemoveContainer" containerID="1e91cdb1164832b2457470e9c5a6b63801cdbe4c69db7c3369929241eb60a130" Jan 31 15:09:53 crc kubenswrapper[4751]: E0131 15:09:53.406552 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wpj7_openshift-machine-config-operator(b4c170e8-22c9-43a9-8b34-9d626c2ccddc)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" Jan 31 15:09:54 crc kubenswrapper[4751]: I0131 15:09:54.741736 4751 generic.go:334] "Generic (PLEG): container finished" podID="e85b3ee2-7979-400f-a052-d00fe6e44fd8" containerID="6309ccb308ea404e3c296e7d0e15d3d0347ba6fcd799c7b7cbf6ada270683b8a" exitCode=0 Jan 31 15:09:54 crc kubenswrapper[4751]: I0131 15:09:54.741813 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rzlpf/must-gather-k47wq" event={"ID":"e85b3ee2-7979-400f-a052-d00fe6e44fd8","Type":"ContainerDied","Data":"6309ccb308ea404e3c296e7d0e15d3d0347ba6fcd799c7b7cbf6ada270683b8a"} Jan 31 15:09:54 crc kubenswrapper[4751]: I0131 15:09:54.742626 4751 scope.go:117] "RemoveContainer" containerID="6309ccb308ea404e3c296e7d0e15d3d0347ba6fcd799c7b7cbf6ada270683b8a" Jan 31 15:09:55 crc kubenswrapper[4751]: I0131 15:09:55.032871 4751 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-rzlpf_must-gather-k47wq_e85b3ee2-7979-400f-a052-d00fe6e44fd8/gather/0.log" Jan 31 15:10:02 crc kubenswrapper[4751]: I0131 15:10:02.066166 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rzlpf/must-gather-k47wq"] Jan 31 15:10:02 crc kubenswrapper[4751]: I0131 15:10:02.066902 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-rzlpf/must-gather-k47wq" podUID="e85b3ee2-7979-400f-a052-d00fe6e44fd8" containerName="copy" containerID="cri-o://73c40052238eafbfd679d14fc1f9ec13e388944d29049da084027f407ea9e611" gracePeriod=2 Jan 31 15:10:02 crc kubenswrapper[4751]: I0131 15:10:02.070153 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rzlpf/must-gather-k47wq"] Jan 31 15:10:02 crc kubenswrapper[4751]: I0131 15:10:02.522131 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rzlpf_must-gather-k47wq_e85b3ee2-7979-400f-a052-d00fe6e44fd8/copy/0.log" Jan 31 15:10:02 crc kubenswrapper[4751]: I0131 15:10:02.522742 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rzlpf/must-gather-k47wq" Jan 31 15:10:02 crc kubenswrapper[4751]: I0131 15:10:02.687949 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4br7s\" (UniqueName: \"kubernetes.io/projected/e85b3ee2-7979-400f-a052-d00fe6e44fd8-kube-api-access-4br7s\") pod \"e85b3ee2-7979-400f-a052-d00fe6e44fd8\" (UID: \"e85b3ee2-7979-400f-a052-d00fe6e44fd8\") " Jan 31 15:10:02 crc kubenswrapper[4751]: I0131 15:10:02.688052 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e85b3ee2-7979-400f-a052-d00fe6e44fd8-must-gather-output\") pod \"e85b3ee2-7979-400f-a052-d00fe6e44fd8\" (UID: \"e85b3ee2-7979-400f-a052-d00fe6e44fd8\") " Jan 31 15:10:02 crc kubenswrapper[4751]: I0131 15:10:02.692794 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e85b3ee2-7979-400f-a052-d00fe6e44fd8-kube-api-access-4br7s" (OuterVolumeSpecName: "kube-api-access-4br7s") pod "e85b3ee2-7979-400f-a052-d00fe6e44fd8" (UID: "e85b3ee2-7979-400f-a052-d00fe6e44fd8"). InnerVolumeSpecName "kube-api-access-4br7s". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:10:02 crc kubenswrapper[4751]: I0131 15:10:02.751131 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e85b3ee2-7979-400f-a052-d00fe6e44fd8-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "e85b3ee2-7979-400f-a052-d00fe6e44fd8" (UID: "e85b3ee2-7979-400f-a052-d00fe6e44fd8"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:10:02 crc kubenswrapper[4751]: I0131 15:10:02.789741 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4br7s\" (UniqueName: \"kubernetes.io/projected/e85b3ee2-7979-400f-a052-d00fe6e44fd8-kube-api-access-4br7s\") on node \"crc\" DevicePath \"\"" Jan 31 15:10:02 crc kubenswrapper[4751]: I0131 15:10:02.789777 4751 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/e85b3ee2-7979-400f-a052-d00fe6e44fd8-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 31 15:10:02 crc kubenswrapper[4751]: I0131 15:10:02.799803 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rzlpf_must-gather-k47wq_e85b3ee2-7979-400f-a052-d00fe6e44fd8/copy/0.log" Jan 31 15:10:02 crc kubenswrapper[4751]: I0131 15:10:02.800220 4751 generic.go:334] "Generic (PLEG): container finished" podID="e85b3ee2-7979-400f-a052-d00fe6e44fd8" containerID="73c40052238eafbfd679d14fc1f9ec13e388944d29049da084027f407ea9e611" exitCode=143 Jan 31 15:10:02 crc kubenswrapper[4751]: I0131 15:10:02.800269 4751 scope.go:117] "RemoveContainer" containerID="73c40052238eafbfd679d14fc1f9ec13e388944d29049da084027f407ea9e611" Jan 31 15:10:02 crc kubenswrapper[4751]: I0131 15:10:02.800273 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-rzlpf/must-gather-k47wq" Jan 31 15:10:02 crc kubenswrapper[4751]: I0131 15:10:02.814058 4751 scope.go:117] "RemoveContainer" containerID="6309ccb308ea404e3c296e7d0e15d3d0347ba6fcd799c7b7cbf6ada270683b8a" Jan 31 15:10:02 crc kubenswrapper[4751]: I0131 15:10:02.863593 4751 scope.go:117] "RemoveContainer" containerID="73c40052238eafbfd679d14fc1f9ec13e388944d29049da084027f407ea9e611" Jan 31 15:10:02 crc kubenswrapper[4751]: E0131 15:10:02.864051 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"73c40052238eafbfd679d14fc1f9ec13e388944d29049da084027f407ea9e611\": container with ID starting with 73c40052238eafbfd679d14fc1f9ec13e388944d29049da084027f407ea9e611 not found: ID does not exist" containerID="73c40052238eafbfd679d14fc1f9ec13e388944d29049da084027f407ea9e611" Jan 31 15:10:02 crc kubenswrapper[4751]: I0131 15:10:02.864099 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"73c40052238eafbfd679d14fc1f9ec13e388944d29049da084027f407ea9e611"} err="failed to get container status \"73c40052238eafbfd679d14fc1f9ec13e388944d29049da084027f407ea9e611\": rpc error: code = NotFound desc = could not find container \"73c40052238eafbfd679d14fc1f9ec13e388944d29049da084027f407ea9e611\": container with ID starting with 73c40052238eafbfd679d14fc1f9ec13e388944d29049da084027f407ea9e611 not found: ID does not exist" Jan 31 15:10:02 crc kubenswrapper[4751]: I0131 15:10:02.864125 4751 scope.go:117] "RemoveContainer" containerID="6309ccb308ea404e3c296e7d0e15d3d0347ba6fcd799c7b7cbf6ada270683b8a" Jan 31 15:10:02 crc kubenswrapper[4751]: E0131 15:10:02.864812 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6309ccb308ea404e3c296e7d0e15d3d0347ba6fcd799c7b7cbf6ada270683b8a\": container with ID starting with 
6309ccb308ea404e3c296e7d0e15d3d0347ba6fcd799c7b7cbf6ada270683b8a not found: ID does not exist" containerID="6309ccb308ea404e3c296e7d0e15d3d0347ba6fcd799c7b7cbf6ada270683b8a" Jan 31 15:10:02 crc kubenswrapper[4751]: I0131 15:10:02.864837 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6309ccb308ea404e3c296e7d0e15d3d0347ba6fcd799c7b7cbf6ada270683b8a"} err="failed to get container status \"6309ccb308ea404e3c296e7d0e15d3d0347ba6fcd799c7b7cbf6ada270683b8a\": rpc error: code = NotFound desc = could not find container \"6309ccb308ea404e3c296e7d0e15d3d0347ba6fcd799c7b7cbf6ada270683b8a\": container with ID starting with 6309ccb308ea404e3c296e7d0e15d3d0347ba6fcd799c7b7cbf6ada270683b8a not found: ID does not exist" Jan 31 15:10:04 crc kubenswrapper[4751]: I0131 15:10:04.414706 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e85b3ee2-7979-400f-a052-d00fe6e44fd8" path="/var/lib/kubelet/pods/e85b3ee2-7979-400f-a052-d00fe6e44fd8/volumes" Jan 31 15:10:05 crc kubenswrapper[4751]: I0131 15:10:05.405949 4751 scope.go:117] "RemoveContainer" containerID="1e91cdb1164832b2457470e9c5a6b63801cdbe4c69db7c3369929241eb60a130" Jan 31 15:10:05 crc kubenswrapper[4751]: E0131 15:10:05.406220 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wpj7_openshift-machine-config-operator(b4c170e8-22c9-43a9-8b34-9d626c2ccddc)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" Jan 31 15:10:20 crc kubenswrapper[4751]: I0131 15:10:20.405858 4751 scope.go:117] "RemoveContainer" containerID="1e91cdb1164832b2457470e9c5a6b63801cdbe4c69db7c3369929241eb60a130" Jan 31 15:10:20 crc kubenswrapper[4751]: E0131 15:10:20.407137 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wpj7_openshift-machine-config-operator(b4c170e8-22c9-43a9-8b34-9d626c2ccddc)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" Jan 31 15:10:31 crc kubenswrapper[4751]: I0131 15:10:31.407320 4751 scope.go:117] "RemoveContainer" containerID="1e91cdb1164832b2457470e9c5a6b63801cdbe4c69db7c3369929241eb60a130" Jan 31 15:10:31 crc kubenswrapper[4751]: E0131 15:10:31.408409 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wpj7_openshift-machine-config-operator(b4c170e8-22c9-43a9-8b34-9d626c2ccddc)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" Jan 31 15:10:43 crc kubenswrapper[4751]: I0131 15:10:43.406439 4751 scope.go:117] "RemoveContainer" containerID="1e91cdb1164832b2457470e9c5a6b63801cdbe4c69db7c3369929241eb60a130" Jan 31 15:10:43 crc kubenswrapper[4751]: E0131 15:10:43.407305 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wpj7_openshift-machine-config-operator(b4c170e8-22c9-43a9-8b34-9d626c2ccddc)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" Jan 31 15:10:47 crc kubenswrapper[4751]: I0131 15:10:47.893165 4751 scope.go:117] "RemoveContainer" containerID="77d9f01225cc43eac33fe40d8bc014694a35ae20a7b11d1e4c070bd741ce303a" Jan 31 15:10:47 crc kubenswrapper[4751]: I0131 15:10:47.931405 4751 scope.go:117] "RemoveContainer" 
containerID="c486e82ff06dabf3bbaf584cc05f4bf167ea45034bb1b4f577adb93e884d0e62" Jan 31 15:10:47 crc kubenswrapper[4751]: I0131 15:10:47.964268 4751 scope.go:117] "RemoveContainer" containerID="a87d6cd135483f11e653acd5122adb7f7e32f94e7051f9157fa4ae04850a4813" Jan 31 15:10:58 crc kubenswrapper[4751]: I0131 15:10:58.406308 4751 scope.go:117] "RemoveContainer" containerID="1e91cdb1164832b2457470e9c5a6b63801cdbe4c69db7c3369929241eb60a130" Jan 31 15:10:58 crc kubenswrapper[4751]: E0131 15:10:58.407619 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wpj7_openshift-machine-config-operator(b4c170e8-22c9-43a9-8b34-9d626c2ccddc)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" Jan 31 15:11:06 crc kubenswrapper[4751]: E0131 15:11:06.186667 4751 configmap.go:193] Couldn't get configMap glance-kuttl-tests/openstack-config: configmap "openstack-config" not found Jan 31 15:11:06 crc kubenswrapper[4751]: E0131 15:11:06.186702 4751 secret.go:188] Couldn't get secret glance-kuttl-tests/openstack-config-secret: secret "openstack-config-secret" not found Jan 31 15:11:06 crc kubenswrapper[4751]: E0131 15:11:06.186991 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config podName:eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb nodeName:}" failed. No retries permitted until 2026-01-31 15:13:08.186969408 +0000 UTC m=+1892.561682323 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "openstack-config" (UniqueName: "kubernetes.io/configmap/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config") pod "openstackclient" (UID: "eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb") : configmap "openstack-config" not found Jan 31 15:11:06 crc kubenswrapper[4751]: E0131 15:11:06.187174 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config-secret podName:eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb nodeName:}" failed. No retries permitted until 2026-01-31 15:13:08.187125332 +0000 UTC m=+1892.561838217 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "openstack-config-secret" (UniqueName: "kubernetes.io/secret/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config-secret") pod "openstackclient" (UID: "eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb") : secret "openstack-config-secret" not found Jan 31 15:11:12 crc kubenswrapper[4751]: I0131 15:11:12.406062 4751 scope.go:117] "RemoveContainer" containerID="1e91cdb1164832b2457470e9c5a6b63801cdbe4c69db7c3369929241eb60a130" Jan 31 15:11:12 crc kubenswrapper[4751]: E0131 15:11:12.406843 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wpj7_openshift-machine-config-operator(b4c170e8-22c9-43a9-8b34-9d626c2ccddc)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" Jan 31 15:11:24 crc kubenswrapper[4751]: I0131 15:11:24.406778 4751 scope.go:117] "RemoveContainer" containerID="1e91cdb1164832b2457470e9c5a6b63801cdbe4c69db7c3369929241eb60a130" Jan 31 15:11:24 crc kubenswrapper[4751]: E0131 15:11:24.407775 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 
5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wpj7_openshift-machine-config-operator(b4c170e8-22c9-43a9-8b34-9d626c2ccddc)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" Jan 31 15:11:38 crc kubenswrapper[4751]: I0131 15:11:38.407401 4751 scope.go:117] "RemoveContainer" containerID="1e91cdb1164832b2457470e9c5a6b63801cdbe4c69db7c3369929241eb60a130" Jan 31 15:11:38 crc kubenswrapper[4751]: E0131 15:11:38.408558 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wpj7_openshift-machine-config-operator(b4c170e8-22c9-43a9-8b34-9d626c2ccddc)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" Jan 31 15:11:48 crc kubenswrapper[4751]: I0131 15:11:48.032395 4751 scope.go:117] "RemoveContainer" containerID="8c9ca246c6d8d22550b0a337fe277ae3824da506216668e0ce9b2ebcd4cee908" Jan 31 15:11:52 crc kubenswrapper[4751]: I0131 15:11:52.406369 4751 scope.go:117] "RemoveContainer" containerID="1e91cdb1164832b2457470e9c5a6b63801cdbe4c69db7c3369929241eb60a130" Jan 31 15:11:52 crc kubenswrapper[4751]: E0131 15:11:52.407010 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wpj7_openshift-machine-config-operator(b4c170e8-22c9-43a9-8b34-9d626c2ccddc)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" Jan 31 15:12:06 crc kubenswrapper[4751]: I0131 15:12:06.413644 4751 scope.go:117] "RemoveContainer" containerID="1e91cdb1164832b2457470e9c5a6b63801cdbe4c69db7c3369929241eb60a130" Jan 31 15:12:06 crc 
kubenswrapper[4751]: E0131 15:12:06.415357 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wpj7_openshift-machine-config-operator(b4c170e8-22c9-43a9-8b34-9d626c2ccddc)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" Jan 31 15:12:21 crc kubenswrapper[4751]: I0131 15:12:21.406505 4751 scope.go:117] "RemoveContainer" containerID="1e91cdb1164832b2457470e9c5a6b63801cdbe4c69db7c3369929241eb60a130" Jan 31 15:12:21 crc kubenswrapper[4751]: E0131 15:12:21.407469 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wpj7_openshift-machine-config-operator(b4c170e8-22c9-43a9-8b34-9d626c2ccddc)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" Jan 31 15:12:30 crc kubenswrapper[4751]: I0131 15:12:30.845434 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-n5n7n/must-gather-kfmqg"] Jan 31 15:12:30 crc kubenswrapper[4751]: E0131 15:12:30.846199 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a49e54-556d-487d-8cdf-3fd3dc9442a5" containerName="extract-content" Jan 31 15:12:30 crc kubenswrapper[4751]: I0131 15:12:30.846214 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a49e54-556d-487d-8cdf-3fd3dc9442a5" containerName="extract-content" Jan 31 15:12:30 crc kubenswrapper[4751]: E0131 15:12:30.846231 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a49e54-556d-487d-8cdf-3fd3dc9442a5" containerName="extract-utilities" Jan 31 15:12:30 crc kubenswrapper[4751]: I0131 15:12:30.846238 4751 
state_mem.go:107] "Deleted CPUSet assignment" podUID="50a49e54-556d-487d-8cdf-3fd3dc9442a5" containerName="extract-utilities" Jan 31 15:12:30 crc kubenswrapper[4751]: E0131 15:12:30.846248 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="50a49e54-556d-487d-8cdf-3fd3dc9442a5" containerName="registry-server" Jan 31 15:12:30 crc kubenswrapper[4751]: I0131 15:12:30.846257 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="50a49e54-556d-487d-8cdf-3fd3dc9442a5" containerName="registry-server" Jan 31 15:12:30 crc kubenswrapper[4751]: E0131 15:12:30.846266 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca5a5a5e-fdc7-409c-b452-44b84779eba2" containerName="extract-utilities" Jan 31 15:12:30 crc kubenswrapper[4751]: I0131 15:12:30.846271 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca5a5a5e-fdc7-409c-b452-44b84779eba2" containerName="extract-utilities" Jan 31 15:12:30 crc kubenswrapper[4751]: E0131 15:12:30.846282 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca5a5a5e-fdc7-409c-b452-44b84779eba2" containerName="extract-content" Jan 31 15:12:30 crc kubenswrapper[4751]: I0131 15:12:30.846288 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca5a5a5e-fdc7-409c-b452-44b84779eba2" containerName="extract-content" Jan 31 15:12:30 crc kubenswrapper[4751]: E0131 15:12:30.846296 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ca5a5a5e-fdc7-409c-b452-44b84779eba2" containerName="registry-server" Jan 31 15:12:30 crc kubenswrapper[4751]: I0131 15:12:30.846302 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ca5a5a5e-fdc7-409c-b452-44b84779eba2" containerName="registry-server" Jan 31 15:12:30 crc kubenswrapper[4751]: E0131 15:12:30.846316 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e85b3ee2-7979-400f-a052-d00fe6e44fd8" containerName="gather" Jan 31 15:12:30 crc kubenswrapper[4751]: I0131 15:12:30.846321 4751 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="e85b3ee2-7979-400f-a052-d00fe6e44fd8" containerName="gather" Jan 31 15:12:30 crc kubenswrapper[4751]: E0131 15:12:30.846331 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e85b3ee2-7979-400f-a052-d00fe6e44fd8" containerName="copy" Jan 31 15:12:30 crc kubenswrapper[4751]: I0131 15:12:30.846338 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="e85b3ee2-7979-400f-a052-d00fe6e44fd8" containerName="copy" Jan 31 15:12:30 crc kubenswrapper[4751]: I0131 15:12:30.846430 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="50a49e54-556d-487d-8cdf-3fd3dc9442a5" containerName="registry-server" Jan 31 15:12:30 crc kubenswrapper[4751]: I0131 15:12:30.846441 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e85b3ee2-7979-400f-a052-d00fe6e44fd8" containerName="copy" Jan 31 15:12:30 crc kubenswrapper[4751]: I0131 15:12:30.846449 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ca5a5a5e-fdc7-409c-b452-44b84779eba2" containerName="registry-server" Jan 31 15:12:30 crc kubenswrapper[4751]: I0131 15:12:30.846456 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="e85b3ee2-7979-400f-a052-d00fe6e44fd8" containerName="gather" Jan 31 15:12:30 crc kubenswrapper[4751]: I0131 15:12:30.847001 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-n5n7n/must-gather-kfmqg" Jan 31 15:12:30 crc kubenswrapper[4751]: I0131 15:12:30.849621 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-n5n7n"/"kube-root-ca.crt" Jan 31 15:12:30 crc kubenswrapper[4751]: I0131 15:12:30.850942 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-n5n7n"/"openshift-service-ca.crt" Jan 31 15:12:30 crc kubenswrapper[4751]: I0131 15:12:30.857294 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-n5n7n/must-gather-kfmqg"] Jan 31 15:12:30 crc kubenswrapper[4751]: I0131 15:12:30.940348 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8169bcf4-4b12-4458-af34-08e57ab8e72a-must-gather-output\") pod \"must-gather-kfmqg\" (UID: \"8169bcf4-4b12-4458-af34-08e57ab8e72a\") " pod="openshift-must-gather-n5n7n/must-gather-kfmqg" Jan 31 15:12:30 crc kubenswrapper[4751]: I0131 15:12:30.940543 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fh4fz\" (UniqueName: \"kubernetes.io/projected/8169bcf4-4b12-4458-af34-08e57ab8e72a-kube-api-access-fh4fz\") pod \"must-gather-kfmqg\" (UID: \"8169bcf4-4b12-4458-af34-08e57ab8e72a\") " pod="openshift-must-gather-n5n7n/must-gather-kfmqg" Jan 31 15:12:31 crc kubenswrapper[4751]: I0131 15:12:31.042125 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fh4fz\" (UniqueName: \"kubernetes.io/projected/8169bcf4-4b12-4458-af34-08e57ab8e72a-kube-api-access-fh4fz\") pod \"must-gather-kfmqg\" (UID: \"8169bcf4-4b12-4458-af34-08e57ab8e72a\") " pod="openshift-must-gather-n5n7n/must-gather-kfmqg" Jan 31 15:12:31 crc kubenswrapper[4751]: I0131 15:12:31.042498 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8169bcf4-4b12-4458-af34-08e57ab8e72a-must-gather-output\") pod \"must-gather-kfmqg\" (UID: \"8169bcf4-4b12-4458-af34-08e57ab8e72a\") " pod="openshift-must-gather-n5n7n/must-gather-kfmqg" Jan 31 15:12:31 crc kubenswrapper[4751]: I0131 15:12:31.042938 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8169bcf4-4b12-4458-af34-08e57ab8e72a-must-gather-output\") pod \"must-gather-kfmqg\" (UID: \"8169bcf4-4b12-4458-af34-08e57ab8e72a\") " pod="openshift-must-gather-n5n7n/must-gather-kfmqg" Jan 31 15:12:31 crc kubenswrapper[4751]: I0131 15:12:31.061947 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fh4fz\" (UniqueName: \"kubernetes.io/projected/8169bcf4-4b12-4458-af34-08e57ab8e72a-kube-api-access-fh4fz\") pod \"must-gather-kfmqg\" (UID: \"8169bcf4-4b12-4458-af34-08e57ab8e72a\") " pod="openshift-must-gather-n5n7n/must-gather-kfmqg" Jan 31 15:12:31 crc kubenswrapper[4751]: I0131 15:12:31.170004 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-n5n7n/must-gather-kfmqg" Jan 31 15:12:31 crc kubenswrapper[4751]: I0131 15:12:31.548952 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-n5n7n/must-gather-kfmqg"] Jan 31 15:12:32 crc kubenswrapper[4751]: I0131 15:12:32.011873 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n5n7n/must-gather-kfmqg" event={"ID":"8169bcf4-4b12-4458-af34-08e57ab8e72a","Type":"ContainerStarted","Data":"d92eb14a52593d6fc22bb08aec8df39b0e7882a4fa59d23aac4f2c37f671f001"} Jan 31 15:12:32 crc kubenswrapper[4751]: I0131 15:12:32.011926 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n5n7n/must-gather-kfmqg" event={"ID":"8169bcf4-4b12-4458-af34-08e57ab8e72a","Type":"ContainerStarted","Data":"cc356eac62cc8a9dc1d2dd948da59d8ed49fe19d58d0b369f5e6af812f05dc08"} Jan 31 15:12:32 crc kubenswrapper[4751]: I0131 15:12:32.011941 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n5n7n/must-gather-kfmqg" event={"ID":"8169bcf4-4b12-4458-af34-08e57ab8e72a","Type":"ContainerStarted","Data":"f5a96fe58f3e9aa570743df67af39e2d59170e23b6208375ca96cb810eba42fc"} Jan 31 15:12:32 crc kubenswrapper[4751]: I0131 15:12:32.026996 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-n5n7n/must-gather-kfmqg" podStartSLOduration=2.026979815 podStartE2EDuration="2.026979815s" podCreationTimestamp="2026-01-31 15:12:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:12:32.024014725 +0000 UTC m=+1856.398727610" watchObservedRunningTime="2026-01-31 15:12:32.026979815 +0000 UTC m=+1856.401692700" Jan 31 15:12:33 crc kubenswrapper[4751]: I0131 15:12:33.406342 4751 scope.go:117] "RemoveContainer" containerID="1e91cdb1164832b2457470e9c5a6b63801cdbe4c69db7c3369929241eb60a130" Jan 31 15:12:33 crc 
kubenswrapper[4751]: E0131 15:12:33.406848 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wpj7_openshift-machine-config-operator(b4c170e8-22c9-43a9-8b34-9d626c2ccddc)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" Jan 31 15:12:47 crc kubenswrapper[4751]: I0131 15:12:47.405879 4751 scope.go:117] "RemoveContainer" containerID="1e91cdb1164832b2457470e9c5a6b63801cdbe4c69db7c3369929241eb60a130" Jan 31 15:12:47 crc kubenswrapper[4751]: E0131 15:12:47.406620 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wpj7_openshift-machine-config-operator(b4c170e8-22c9-43a9-8b34-9d626c2ccddc)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" Jan 31 15:12:48 crc kubenswrapper[4751]: I0131 15:12:48.090722 4751 scope.go:117] "RemoveContainer" containerID="d083c8910ad51529e38c062770d9b1a45be20502e7c76553d0090ac7a9898be5" Jan 31 15:12:48 crc kubenswrapper[4751]: I0131 15:12:48.108086 4751 scope.go:117] "RemoveContainer" containerID="b5cb3ee4032129b568b4ee0fa56e2f13d4d48986ad6a3c19ca00fa7b56b0e716" Jan 31 15:12:59 crc kubenswrapper[4751]: I0131 15:12:59.405269 4751 scope.go:117] "RemoveContainer" containerID="1e91cdb1164832b2457470e9c5a6b63801cdbe4c69db7c3369929241eb60a130" Jan 31 15:12:59 crc kubenswrapper[4751]: E0131 15:12:59.405722 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2wpj7_openshift-machine-config-operator(b4c170e8-22c9-43a9-8b34-9d626c2ccddc)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" Jan 31 15:13:08 crc kubenswrapper[4751]: E0131 15:13:08.240750 4751 configmap.go:193] Couldn't get configMap glance-kuttl-tests/openstack-config: configmap "openstack-config" not found Jan 31 15:13:08 crc kubenswrapper[4751]: E0131 15:13:08.241333 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config podName:eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb nodeName:}" failed. No retries permitted until 2026-01-31 15:15:10.241315453 +0000 UTC m=+2014.616028338 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "openstack-config" (UniqueName: "kubernetes.io/configmap/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config") pod "openstackclient" (UID: "eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb") : configmap "openstack-config" not found Jan 31 15:13:08 crc kubenswrapper[4751]: E0131 15:13:08.240816 4751 secret.go:188] Couldn't get secret glance-kuttl-tests/openstack-config-secret: secret "openstack-config-secret" not found Jan 31 15:13:08 crc kubenswrapper[4751]: E0131 15:13:08.241443 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config-secret podName:eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb nodeName:}" failed. No retries permitted until 2026-01-31 15:15:10.241423386 +0000 UTC m=+2014.616136271 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "openstack-config-secret" (UniqueName: "kubernetes.io/secret/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config-secret") pod "openstackclient" (UID: "eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb") : secret "openstack-config-secret" not found Jan 31 15:13:08 crc kubenswrapper[4751]: I0131 15:13:08.297379 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f_eec59a88-8f4d-4482-aa2a-11a508cc3a79/util/0.log" Jan 31 15:13:08 crc kubenswrapper[4751]: I0131 15:13:08.401050 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f_eec59a88-8f4d-4482-aa2a-11a508cc3a79/pull/0.log" Jan 31 15:13:08 crc kubenswrapper[4751]: I0131 15:13:08.419250 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f_eec59a88-8f4d-4482-aa2a-11a508cc3a79/util/0.log" Jan 31 15:13:08 crc kubenswrapper[4751]: I0131 15:13:08.457584 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f_eec59a88-8f4d-4482-aa2a-11a508cc3a79/pull/0.log" Jan 31 15:13:08 crc kubenswrapper[4751]: I0131 15:13:08.754394 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f_eec59a88-8f4d-4482-aa2a-11a508cc3a79/pull/0.log" Jan 31 15:13:08 crc kubenswrapper[4751]: I0131 15:13:08.759544 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f_eec59a88-8f4d-4482-aa2a-11a508cc3a79/util/0.log" Jan 31 15:13:08 crc kubenswrapper[4751]: I0131 15:13:08.820153 4751 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openstack-operators_920b3933541dd54eb27cdc8c5dcad58318a776ec0e7a3ec14a5289a926c857f_eec59a88-8f4d-4482-aa2a-11a508cc3a79/extract/0.log" Jan 31 15:13:08 crc kubenswrapper[4751]: I0131 15:13:08.911159 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-controller-manager-6d4cb5b58-r8xn7_91cc4333-403a-4ce4-a347-8b475ad0169a/manager/0.log" Jan 31 15:13:08 crc kubenswrapper[4751]: I0131 15:13:08.985341 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openstack-operators_horizon-operator-index-vjs56_95bedc09-cab6-4e6b-a210-8cb1f8b39601/registry-server/0.log" Jan 31 15:13:12 crc kubenswrapper[4751]: I0131 15:13:12.406394 4751 scope.go:117] "RemoveContainer" containerID="1e91cdb1164832b2457470e9c5a6b63801cdbe4c69db7c3369929241eb60a130" Jan 31 15:13:12 crc kubenswrapper[4751]: E0131 15:13:12.406941 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wpj7_openshift-machine-config-operator(b4c170e8-22c9-43a9-8b34-9d626c2ccddc)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" Jan 31 15:13:22 crc kubenswrapper[4751]: I0131 15:13:22.256876 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-h4drr_5c630253-f658-44fb-891d-f560f1e2b577/control-plane-machine-set-operator/0.log" Jan 31 15:13:22 crc kubenswrapper[4751]: I0131 15:13:22.430371 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4gqrl_bcd7a932-6db9-4cca-b619-852242324725/kube-rbac-proxy/0.log" Jan 31 15:13:22 crc kubenswrapper[4751]: I0131 15:13:22.433770 4751 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-4gqrl_bcd7a932-6db9-4cca-b619-852242324725/machine-api-operator/0.log" Jan 31 15:13:23 crc kubenswrapper[4751]: I0131 15:13:23.406290 4751 scope.go:117] "RemoveContainer" containerID="1e91cdb1164832b2457470e9c5a6b63801cdbe4c69db7c3369929241eb60a130" Jan 31 15:13:23 crc kubenswrapper[4751]: E0131 15:13:23.406532 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wpj7_openshift-machine-config-operator(b4c170e8-22c9-43a9-8b34-9d626c2ccddc)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" Jan 31 15:13:35 crc kubenswrapper[4751]: I0131 15:13:35.405964 4751 scope.go:117] "RemoveContainer" containerID="1e91cdb1164832b2457470e9c5a6b63801cdbe4c69db7c3369929241eb60a130" Jan 31 15:13:35 crc kubenswrapper[4751]: E0131 15:13:35.406811 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wpj7_openshift-machine-config-operator(b4c170e8-22c9-43a9-8b34-9d626c2ccddc)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" Jan 31 15:13:48 crc kubenswrapper[4751]: I0131 15:13:48.405761 4751 scope.go:117] "RemoveContainer" containerID="1e91cdb1164832b2457470e9c5a6b63801cdbe4c69db7c3369929241eb60a130" Jan 31 15:13:48 crc kubenswrapper[4751]: E0131 15:13:48.406454 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-2wpj7_openshift-machine-config-operator(b4c170e8-22c9-43a9-8b34-9d626c2ccddc)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" Jan 31 15:13:50 crc kubenswrapper[4751]: I0131 15:13:50.817553 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-6dhf9_6b667c31-e911-496a-9c8b-12c906e724ec/kube-rbac-proxy/0.log" Jan 31 15:13:50 crc kubenswrapper[4751]: I0131 15:13:50.843221 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_controller-6968d8fdc4-6dhf9_6b667c31-e911-496a-9c8b-12c906e724ec/controller/0.log" Jan 31 15:13:51 crc kubenswrapper[4751]: I0131 15:13:51.014418 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9z9n2_b1f214e9-14db-462f-900c-3652ec7908e5/cp-frr-files/0.log" Jan 31 15:13:51 crc kubenswrapper[4751]: I0131 15:13:51.162193 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9z9n2_b1f214e9-14db-462f-900c-3652ec7908e5/cp-frr-files/0.log" Jan 31 15:13:51 crc kubenswrapper[4751]: I0131 15:13:51.174540 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9z9n2_b1f214e9-14db-462f-900c-3652ec7908e5/cp-reloader/0.log" Jan 31 15:13:51 crc kubenswrapper[4751]: I0131 15:13:51.189698 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9z9n2_b1f214e9-14db-462f-900c-3652ec7908e5/cp-metrics/0.log" Jan 31 15:13:51 crc kubenswrapper[4751]: I0131 15:13:51.203887 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9z9n2_b1f214e9-14db-462f-900c-3652ec7908e5/cp-reloader/0.log" Jan 31 15:13:51 crc kubenswrapper[4751]: I0131 15:13:51.370927 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9z9n2_b1f214e9-14db-462f-900c-3652ec7908e5/cp-frr-files/0.log" Jan 31 15:13:51 crc kubenswrapper[4751]: I0131 
15:13:51.385035 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9z9n2_b1f214e9-14db-462f-900c-3652ec7908e5/cp-metrics/0.log" Jan 31 15:13:51 crc kubenswrapper[4751]: I0131 15:13:51.385642 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9z9n2_b1f214e9-14db-462f-900c-3652ec7908e5/cp-reloader/0.log" Jan 31 15:13:51 crc kubenswrapper[4751]: I0131 15:13:51.411893 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9z9n2_b1f214e9-14db-462f-900c-3652ec7908e5/cp-metrics/0.log" Jan 31 15:13:51 crc kubenswrapper[4751]: I0131 15:13:51.596786 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9z9n2_b1f214e9-14db-462f-900c-3652ec7908e5/cp-frr-files/0.log" Jan 31 15:13:51 crc kubenswrapper[4751]: I0131 15:13:51.620021 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9z9n2_b1f214e9-14db-462f-900c-3652ec7908e5/cp-metrics/0.log" Jan 31 15:13:51 crc kubenswrapper[4751]: I0131 15:13:51.625100 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9z9n2_b1f214e9-14db-462f-900c-3652ec7908e5/controller/0.log" Jan 31 15:13:51 crc kubenswrapper[4751]: I0131 15:13:51.625816 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9z9n2_b1f214e9-14db-462f-900c-3652ec7908e5/cp-reloader/0.log" Jan 31 15:13:51 crc kubenswrapper[4751]: I0131 15:13:51.788788 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9z9n2_b1f214e9-14db-462f-900c-3652ec7908e5/frr-metrics/0.log" Jan 31 15:13:51 crc kubenswrapper[4751]: I0131 15:13:51.822583 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9z9n2_b1f214e9-14db-462f-900c-3652ec7908e5/kube-rbac-proxy/0.log" Jan 31 15:13:51 crc kubenswrapper[4751]: I0131 15:13:51.879885 4751 log.go:25] "Finished parsing log file" 
path="/var/log/pods/metallb-system_frr-k8s-9z9n2_b1f214e9-14db-462f-900c-3652ec7908e5/kube-rbac-proxy-frr/0.log" Jan 31 15:13:52 crc kubenswrapper[4751]: I0131 15:13:52.004939 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9z9n2_b1f214e9-14db-462f-900c-3652ec7908e5/reloader/0.log" Jan 31 15:13:52 crc kubenswrapper[4751]: I0131 15:13:52.063293 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-webhook-server-7df86c4f6c-qf86j_94655b12-be6a-4043-8f7c-80d1b7fb1a2f/frr-k8s-webhook-server/0.log" Jan 31 15:13:52 crc kubenswrapper[4751]: I0131 15:13:52.341135 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-controller-manager-6b999687d7-vf7mn_bd60e998-83e4-442a-98ac-c4e33d4b4765/manager/0.log" Jan 31 15:13:52 crc kubenswrapper[4751]: I0131 15:13:52.352741 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_metallb-operator-webhook-server-5c46dd7d46-8xt78_01320eb9-ccb5-4593-866a-f49553fa7262/webhook-server/0.log" Jan 31 15:13:52 crc kubenswrapper[4751]: I0131 15:13:52.490255 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_frr-k8s-9z9n2_b1f214e9-14db-462f-900c-3652ec7908e5/frr/0.log" Jan 31 15:13:52 crc kubenswrapper[4751]: I0131 15:13:52.531679 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qv6gh_7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc/kube-rbac-proxy/0.log" Jan 31 15:13:52 crc kubenswrapper[4751]: I0131 15:13:52.745205 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/metallb-system_speaker-qv6gh_7e8b57b9-8a0b-4ef2-83c9-85f9e60c9adc/speaker/0.log" Jan 31 15:14:02 crc kubenswrapper[4751]: I0131 15:14:02.406449 4751 scope.go:117] "RemoveContainer" containerID="1e91cdb1164832b2457470e9c5a6b63801cdbe4c69db7c3369929241eb60a130" Jan 31 15:14:02 crc kubenswrapper[4751]: E0131 15:14:02.407585 4751 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wpj7_openshift-machine-config-operator(b4c170e8-22c9-43a9-8b34-9d626c2ccddc)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" Jan 31 15:14:05 crc kubenswrapper[4751]: I0131 15:14:05.057545 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/glance-kuttl-tests_openstackclient_eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb/openstackclient/0.log" Jan 31 15:14:14 crc kubenswrapper[4751]: I0131 15:14:14.406187 4751 scope.go:117] "RemoveContainer" containerID="1e91cdb1164832b2457470e9c5a6b63801cdbe4c69db7c3369929241eb60a130" Jan 31 15:14:14 crc kubenswrapper[4751]: E0131 15:14:14.407144 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wpj7_openshift-machine-config-operator(b4c170e8-22c9-43a9-8b34-9d626c2ccddc)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" Jan 31 15:14:17 crc kubenswrapper[4751]: I0131 15:14:17.399137 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz_f3380dc7-49d9-4d61-a0bb-003c1c5e2742/util/0.log" Jan 31 15:14:17 crc kubenswrapper[4751]: I0131 15:14:17.517935 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz_f3380dc7-49d9-4d61-a0bb-003c1c5e2742/util/0.log" Jan 31 15:14:17 crc kubenswrapper[4751]: I0131 15:14:17.547593 4751 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz_f3380dc7-49d9-4d61-a0bb-003c1c5e2742/pull/0.log" Jan 31 15:14:17 crc kubenswrapper[4751]: I0131 15:14:17.591112 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz_f3380dc7-49d9-4d61-a0bb-003c1c5e2742/pull/0.log" Jan 31 15:14:17 crc kubenswrapper[4751]: I0131 15:14:17.718450 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz_f3380dc7-49d9-4d61-a0bb-003c1c5e2742/util/0.log" Jan 31 15:14:17 crc kubenswrapper[4751]: I0131 15:14:17.734058 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz_f3380dc7-49d9-4d61-a0bb-003c1c5e2742/extract/0.log" Jan 31 15:14:17 crc kubenswrapper[4751]: I0131 15:14:17.765494 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_270996307cd21d144be796860235064b5127c2fcf62ccccd6689c259dczsbcz_f3380dc7-49d9-4d61-a0bb-003c1c5e2742/pull/0.log" Jan 31 15:14:17 crc kubenswrapper[4751]: I0131 15:14:17.860267 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcs7h_c83f0a10-f56b-4795-93b9-ee224d439648/extract-utilities/0.log" Jan 31 15:14:18 crc kubenswrapper[4751]: I0131 15:14:18.035033 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcs7h_c83f0a10-f56b-4795-93b9-ee224d439648/extract-utilities/0.log" Jan 31 15:14:18 crc kubenswrapper[4751]: I0131 15:14:18.073625 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcs7h_c83f0a10-f56b-4795-93b9-ee224d439648/extract-content/0.log" Jan 31 15:14:18 crc kubenswrapper[4751]: I0131 15:14:18.079821 4751 log.go:25] 
"Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcs7h_c83f0a10-f56b-4795-93b9-ee224d439648/extract-content/0.log" Jan 31 15:14:18 crc kubenswrapper[4751]: I0131 15:14:18.225662 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcs7h_c83f0a10-f56b-4795-93b9-ee224d439648/extract-content/0.log" Jan 31 15:14:18 crc kubenswrapper[4751]: I0131 15:14:18.233743 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcs7h_c83f0a10-f56b-4795-93b9-ee224d439648/extract-utilities/0.log" Jan 31 15:14:18 crc kubenswrapper[4751]: I0131 15:14:18.432275 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gr5gf_2eb5e3aa-17fa-49a0-a422-bc69a8a410fb/extract-utilities/0.log" Jan 31 15:14:18 crc kubenswrapper[4751]: I0131 15:14:18.587149 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gr5gf_2eb5e3aa-17fa-49a0-a422-bc69a8a410fb/extract-utilities/0.log" Jan 31 15:14:18 crc kubenswrapper[4751]: I0131 15:14:18.597775 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gr5gf_2eb5e3aa-17fa-49a0-a422-bc69a8a410fb/extract-content/0.log" Jan 31 15:14:18 crc kubenswrapper[4751]: I0131 15:14:18.639435 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gr5gf_2eb5e3aa-17fa-49a0-a422-bc69a8a410fb/extract-content/0.log" Jan 31 15:14:18 crc kubenswrapper[4751]: I0131 15:14:18.678347 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-qcs7h_c83f0a10-f56b-4795-93b9-ee224d439648/registry-server/0.log" Jan 31 15:14:18 crc kubenswrapper[4751]: I0131 15:14:18.806319 4751 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-gr5gf_2eb5e3aa-17fa-49a0-a422-bc69a8a410fb/extract-utilities/0.log" Jan 31 15:14:18 crc kubenswrapper[4751]: I0131 15:14:18.818451 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gr5gf_2eb5e3aa-17fa-49a0-a422-bc69a8a410fb/extract-content/0.log" Jan 31 15:14:19 crc kubenswrapper[4751]: I0131 15:14:19.056683 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-jv94g_9853dd16-26f9-4fe4-9468-52d39dd4dd1f/marketplace-operator/0.log" Jan 31 15:14:19 crc kubenswrapper[4751]: I0131 15:14:19.086777 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-22krg_affc293d-ac4e-49ad-be4a-bc13d7c056a7/extract-utilities/0.log" Jan 31 15:14:19 crc kubenswrapper[4751]: I0131 15:14:19.190768 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-gr5gf_2eb5e3aa-17fa-49a0-a422-bc69a8a410fb/registry-server/0.log" Jan 31 15:14:19 crc kubenswrapper[4751]: I0131 15:14:19.319682 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-22krg_affc293d-ac4e-49ad-be4a-bc13d7c056a7/extract-utilities/0.log" Jan 31 15:14:19 crc kubenswrapper[4751]: I0131 15:14:19.327166 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-22krg_affc293d-ac4e-49ad-be4a-bc13d7c056a7/extract-content/0.log" Jan 31 15:14:19 crc kubenswrapper[4751]: I0131 15:14:19.345811 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-22krg_affc293d-ac4e-49ad-be4a-bc13d7c056a7/extract-content/0.log" Jan 31 15:14:19 crc kubenswrapper[4751]: I0131 15:14:19.469335 4751 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_redhat-marketplace-22krg_affc293d-ac4e-49ad-be4a-bc13d7c056a7/extract-utilities/0.log" Jan 31 15:14:19 crc kubenswrapper[4751]: I0131 15:14:19.470620 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-22krg_affc293d-ac4e-49ad-be4a-bc13d7c056a7/extract-content/0.log" Jan 31 15:14:19 crc kubenswrapper[4751]: I0131 15:14:19.597076 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-marketplace-22krg_affc293d-ac4e-49ad-be4a-bc13d7c056a7/registry-server/0.log" Jan 31 15:14:19 crc kubenswrapper[4751]: I0131 15:14:19.640924 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-678m7_43fbbbf2-c128-46a4-9cc3-99e46c617027/extract-utilities/0.log" Jan 31 15:14:19 crc kubenswrapper[4751]: I0131 15:14:19.809625 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-678m7_43fbbbf2-c128-46a4-9cc3-99e46c617027/extract-content/0.log" Jan 31 15:14:19 crc kubenswrapper[4751]: I0131 15:14:19.816107 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-678m7_43fbbbf2-c128-46a4-9cc3-99e46c617027/extract-utilities/0.log" Jan 31 15:14:19 crc kubenswrapper[4751]: I0131 15:14:19.859429 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-678m7_43fbbbf2-c128-46a4-9cc3-99e46c617027/extract-content/0.log" Jan 31 15:14:19 crc kubenswrapper[4751]: I0131 15:14:19.953861 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-678m7_43fbbbf2-c128-46a4-9cc3-99e46c617027/extract-utilities/0.log" Jan 31 15:14:20 crc kubenswrapper[4751]: I0131 15:14:20.019199 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-678m7_43fbbbf2-c128-46a4-9cc3-99e46c617027/extract-content/0.log" Jan 
31 15:14:20 crc kubenswrapper[4751]: I0131 15:14:20.521278 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-678m7_43fbbbf2-c128-46a4-9cc3-99e46c617027/registry-server/0.log" Jan 31 15:14:28 crc kubenswrapper[4751]: I0131 15:14:28.406427 4751 scope.go:117] "RemoveContainer" containerID="1e91cdb1164832b2457470e9c5a6b63801cdbe4c69db7c3369929241eb60a130" Jan 31 15:14:28 crc kubenswrapper[4751]: E0131 15:14:28.407183 4751 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-2wpj7_openshift-machine-config-operator(b4c170e8-22c9-43a9-8b34-9d626c2ccddc)\"" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" Jan 31 15:14:43 crc kubenswrapper[4751]: I0131 15:14:43.405741 4751 scope.go:117] "RemoveContainer" containerID="1e91cdb1164832b2457470e9c5a6b63801cdbe4c69db7c3369929241eb60a130" Jan 31 15:14:43 crc kubenswrapper[4751]: I0131 15:14:43.814711 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" event={"ID":"b4c170e8-22c9-43a9-8b34-9d626c2ccddc","Type":"ContainerStarted","Data":"1255c884133e00fc9c5d808129089de90e3ff1b6af74e3a15a0350ae021f2f6b"} Jan 31 15:15:00 crc kubenswrapper[4751]: I0131 15:15:00.152677 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497875-rng7x"] Jan 31 15:15:00 crc kubenswrapper[4751]: I0131 15:15:00.154626 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-rng7x" Jan 31 15:15:00 crc kubenswrapper[4751]: I0131 15:15:00.160381 4751 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 31 15:15:00 crc kubenswrapper[4751]: I0131 15:15:00.160437 4751 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 31 15:15:00 crc kubenswrapper[4751]: I0131 15:15:00.161651 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497875-rng7x"] Jan 31 15:15:00 crc kubenswrapper[4751]: I0131 15:15:00.322154 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce076099-f84a-49c6-9566-cae17c8efd6d-config-volume\") pod \"collect-profiles-29497875-rng7x\" (UID: \"ce076099-f84a-49c6-9566-cae17c8efd6d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-rng7x" Jan 31 15:15:00 crc kubenswrapper[4751]: I0131 15:15:00.322208 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j6kqz\" (UniqueName: \"kubernetes.io/projected/ce076099-f84a-49c6-9566-cae17c8efd6d-kube-api-access-j6kqz\") pod \"collect-profiles-29497875-rng7x\" (UID: \"ce076099-f84a-49c6-9566-cae17c8efd6d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-rng7x" Jan 31 15:15:00 crc kubenswrapper[4751]: I0131 15:15:00.322290 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce076099-f84a-49c6-9566-cae17c8efd6d-secret-volume\") pod \"collect-profiles-29497875-rng7x\" (UID: \"ce076099-f84a-49c6-9566-cae17c8efd6d\") " 
pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-rng7x" Jan 31 15:15:00 crc kubenswrapper[4751]: I0131 15:15:00.423662 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce076099-f84a-49c6-9566-cae17c8efd6d-secret-volume\") pod \"collect-profiles-29497875-rng7x\" (UID: \"ce076099-f84a-49c6-9566-cae17c8efd6d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-rng7x" Jan 31 15:15:00 crc kubenswrapper[4751]: I0131 15:15:00.423779 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce076099-f84a-49c6-9566-cae17c8efd6d-config-volume\") pod \"collect-profiles-29497875-rng7x\" (UID: \"ce076099-f84a-49c6-9566-cae17c8efd6d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-rng7x" Jan 31 15:15:00 crc kubenswrapper[4751]: I0131 15:15:00.423800 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j6kqz\" (UniqueName: \"kubernetes.io/projected/ce076099-f84a-49c6-9566-cae17c8efd6d-kube-api-access-j6kqz\") pod \"collect-profiles-29497875-rng7x\" (UID: \"ce076099-f84a-49c6-9566-cae17c8efd6d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-rng7x" Jan 31 15:15:00 crc kubenswrapper[4751]: I0131 15:15:00.425651 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce076099-f84a-49c6-9566-cae17c8efd6d-config-volume\") pod \"collect-profiles-29497875-rng7x\" (UID: \"ce076099-f84a-49c6-9566-cae17c8efd6d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-rng7x" Jan 31 15:15:00 crc kubenswrapper[4751]: I0131 15:15:00.443906 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: 
\"kubernetes.io/secret/ce076099-f84a-49c6-9566-cae17c8efd6d-secret-volume\") pod \"collect-profiles-29497875-rng7x\" (UID: \"ce076099-f84a-49c6-9566-cae17c8efd6d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-rng7x" Jan 31 15:15:00 crc kubenswrapper[4751]: I0131 15:15:00.464267 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j6kqz\" (UniqueName: \"kubernetes.io/projected/ce076099-f84a-49c6-9566-cae17c8efd6d-kube-api-access-j6kqz\") pod \"collect-profiles-29497875-rng7x\" (UID: \"ce076099-f84a-49c6-9566-cae17c8efd6d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-rng7x" Jan 31 15:15:00 crc kubenswrapper[4751]: I0131 15:15:00.479110 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-rng7x" Jan 31 15:15:00 crc kubenswrapper[4751]: I0131 15:15:00.730420 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497875-rng7x"] Jan 31 15:15:00 crc kubenswrapper[4751]: I0131 15:15:00.935650 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-rng7x" event={"ID":"ce076099-f84a-49c6-9566-cae17c8efd6d","Type":"ContainerStarted","Data":"6779d327b09eddc653cf10f40050a18611bafc3b2fcb0394ad4d3ee6ba27c365"} Jan 31 15:15:00 crc kubenswrapper[4751]: I0131 15:15:00.935882 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-rng7x" event={"ID":"ce076099-f84a-49c6-9566-cae17c8efd6d","Type":"ContainerStarted","Data":"ce54e21704ea948301d201e015fb2c4d5600e26e25a29f067f6f22ad0e132991"} Jan 31 15:15:00 crc kubenswrapper[4751]: I0131 15:15:00.956481 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-rng7x" 
podStartSLOduration=0.956461328 podStartE2EDuration="956.461328ms" podCreationTimestamp="2026-01-31 15:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-31 15:15:00.955472112 +0000 UTC m=+2005.330184997" watchObservedRunningTime="2026-01-31 15:15:00.956461328 +0000 UTC m=+2005.331174213" Jan 31 15:15:01 crc kubenswrapper[4751]: I0131 15:15:01.942964 4751 generic.go:334] "Generic (PLEG): container finished" podID="ce076099-f84a-49c6-9566-cae17c8efd6d" containerID="6779d327b09eddc653cf10f40050a18611bafc3b2fcb0394ad4d3ee6ba27c365" exitCode=0 Jan 31 15:15:01 crc kubenswrapper[4751]: I0131 15:15:01.943119 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-rng7x" event={"ID":"ce076099-f84a-49c6-9566-cae17c8efd6d","Type":"ContainerDied","Data":"6779d327b09eddc653cf10f40050a18611bafc3b2fcb0394ad4d3ee6ba27c365"} Jan 31 15:15:03 crc kubenswrapper[4751]: I0131 15:15:03.174636 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-rng7x" Jan 31 15:15:03 crc kubenswrapper[4751]: I0131 15:15:03.363753 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce076099-f84a-49c6-9566-cae17c8efd6d-config-volume\") pod \"ce076099-f84a-49c6-9566-cae17c8efd6d\" (UID: \"ce076099-f84a-49c6-9566-cae17c8efd6d\") " Jan 31 15:15:03 crc kubenswrapper[4751]: I0131 15:15:03.364035 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce076099-f84a-49c6-9566-cae17c8efd6d-secret-volume\") pod \"ce076099-f84a-49c6-9566-cae17c8efd6d\" (UID: \"ce076099-f84a-49c6-9566-cae17c8efd6d\") " Jan 31 15:15:03 crc kubenswrapper[4751]: I0131 15:15:03.364355 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce076099-f84a-49c6-9566-cae17c8efd6d-config-volume" (OuterVolumeSpecName: "config-volume") pod "ce076099-f84a-49c6-9566-cae17c8efd6d" (UID: "ce076099-f84a-49c6-9566-cae17c8efd6d"). InnerVolumeSpecName "config-volume". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 31 15:15:03 crc kubenswrapper[4751]: I0131 15:15:03.365132 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j6kqz\" (UniqueName: \"kubernetes.io/projected/ce076099-f84a-49c6-9566-cae17c8efd6d-kube-api-access-j6kqz\") pod \"ce076099-f84a-49c6-9566-cae17c8efd6d\" (UID: \"ce076099-f84a-49c6-9566-cae17c8efd6d\") " Jan 31 15:15:03 crc kubenswrapper[4751]: I0131 15:15:03.365520 4751 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/ce076099-f84a-49c6-9566-cae17c8efd6d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:03 crc kubenswrapper[4751]: I0131 15:15:03.369141 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce076099-f84a-49c6-9566-cae17c8efd6d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "ce076099-f84a-49c6-9566-cae17c8efd6d" (UID: "ce076099-f84a-49c6-9566-cae17c8efd6d"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 31 15:15:03 crc kubenswrapper[4751]: I0131 15:15:03.370091 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce076099-f84a-49c6-9566-cae17c8efd6d-kube-api-access-j6kqz" (OuterVolumeSpecName: "kube-api-access-j6kqz") pod "ce076099-f84a-49c6-9566-cae17c8efd6d" (UID: "ce076099-f84a-49c6-9566-cae17c8efd6d"). InnerVolumeSpecName "kube-api-access-j6kqz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:15:03 crc kubenswrapper[4751]: I0131 15:15:03.467137 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j6kqz\" (UniqueName: \"kubernetes.io/projected/ce076099-f84a-49c6-9566-cae17c8efd6d-kube-api-access-j6kqz\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:03 crc kubenswrapper[4751]: I0131 15:15:03.467179 4751 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/ce076099-f84a-49c6-9566-cae17c8efd6d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 31 15:15:03 crc kubenswrapper[4751]: I0131 15:15:03.958352 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-rng7x" event={"ID":"ce076099-f84a-49c6-9566-cae17c8efd6d","Type":"ContainerDied","Data":"ce54e21704ea948301d201e015fb2c4d5600e26e25a29f067f6f22ad0e132991"} Jan 31 15:15:03 crc kubenswrapper[4751]: I0131 15:15:03.958391 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29497875-rng7x" Jan 31 15:15:03 crc kubenswrapper[4751]: I0131 15:15:03.958411 4751 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce54e21704ea948301d201e015fb2c4d5600e26e25a29f067f6f22ad0e132991" Jan 31 15:15:04 crc kubenswrapper[4751]: I0131 15:15:04.255169 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497830-rpwmp"] Jan 31 15:15:04 crc kubenswrapper[4751]: I0131 15:15:04.262916 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29497830-rpwmp"] Jan 31 15:15:04 crc kubenswrapper[4751]: I0131 15:15:04.420454 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eade01dc-846b-42a8-a6ed-8cf0a0663e82" path="/var/lib/kubelet/pods/eade01dc-846b-42a8-a6ed-8cf0a0663e82/volumes" Jan 31 15:15:10 crc kubenswrapper[4751]: E0131 15:15:10.269385 4751 secret.go:188] Couldn't get secret glance-kuttl-tests/openstack-config-secret: secret "openstack-config-secret" not found Jan 31 15:15:10 crc kubenswrapper[4751]: E0131 15:15:10.270421 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config-secret podName:eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb nodeName:}" failed. No retries permitted until 2026-01-31 15:17:12.270380754 +0000 UTC m=+2136.645093679 (durationBeforeRetry 2m2s). 
Error: MountVolume.SetUp failed for volume "openstack-config-secret" (UniqueName: "kubernetes.io/secret/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config-secret") pod "openstackclient" (UID: "eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb") : secret "openstack-config-secret" not found Jan 31 15:15:10 crc kubenswrapper[4751]: E0131 15:15:10.271366 4751 configmap.go:193] Couldn't get configMap glance-kuttl-tests/openstack-config: configmap "openstack-config" not found Jan 31 15:15:10 crc kubenswrapper[4751]: E0131 15:15:10.271458 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config podName:eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb nodeName:}" failed. No retries permitted until 2026-01-31 15:17:12.271433142 +0000 UTC m=+2136.646146067 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "openstack-config" (UniqueName: "kubernetes.io/configmap/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config") pod "openstackclient" (UID: "eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb") : configmap "openstack-config" not found Jan 31 15:15:32 crc kubenswrapper[4751]: I0131 15:15:32.187180 4751 generic.go:334] "Generic (PLEG): container finished" podID="8169bcf4-4b12-4458-af34-08e57ab8e72a" containerID="cc356eac62cc8a9dc1d2dd948da59d8ed49fe19d58d0b369f5e6af812f05dc08" exitCode=0 Jan 31 15:15:32 crc kubenswrapper[4751]: I0131 15:15:32.187627 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-n5n7n/must-gather-kfmqg" event={"ID":"8169bcf4-4b12-4458-af34-08e57ab8e72a","Type":"ContainerDied","Data":"cc356eac62cc8a9dc1d2dd948da59d8ed49fe19d58d0b369f5e6af812f05dc08"} Jan 31 15:15:32 crc kubenswrapper[4751]: I0131 15:15:32.188195 4751 scope.go:117] "RemoveContainer" containerID="cc356eac62cc8a9dc1d2dd948da59d8ed49fe19d58d0b369f5e6af812f05dc08" Jan 31 15:15:33 crc kubenswrapper[4751]: I0131 15:15:33.046697 4751 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-n5n7n_must-gather-kfmqg_8169bcf4-4b12-4458-af34-08e57ab8e72a/gather/0.log"
Jan 31 15:15:41 crc kubenswrapper[4751]: I0131 15:15:41.744696 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-n5n7n/must-gather-kfmqg"]
Jan 31 15:15:41 crc kubenswrapper[4751]: I0131 15:15:41.745754 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-must-gather-n5n7n/must-gather-kfmqg" podUID="8169bcf4-4b12-4458-af34-08e57ab8e72a" containerName="copy" containerID="cri-o://d92eb14a52593d6fc22bb08aec8df39b0e7882a4fa59d23aac4f2c37f671f001" gracePeriod=2
Jan 31 15:15:41 crc kubenswrapper[4751]: I0131 15:15:41.753942 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-n5n7n/must-gather-kfmqg"]
Jan 31 15:15:42 crc kubenswrapper[4751]: I0131 15:15:42.256706 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-n5n7n_must-gather-kfmqg_8169bcf4-4b12-4458-af34-08e57ab8e72a/copy/0.log"
Jan 31 15:15:42 crc kubenswrapper[4751]: I0131 15:15:42.257800 4751 generic.go:334] "Generic (PLEG): container finished" podID="8169bcf4-4b12-4458-af34-08e57ab8e72a" containerID="d92eb14a52593d6fc22bb08aec8df39b0e7882a4fa59d23aac4f2c37f671f001" exitCode=143
Jan 31 15:15:42 crc kubenswrapper[4751]: I0131 15:15:42.301755 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-n5n7n_must-gather-kfmqg_8169bcf4-4b12-4458-af34-08e57ab8e72a/copy/0.log"
Jan 31 15:15:42 crc kubenswrapper[4751]: I0131 15:15:42.302141 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-n5n7n/must-gather-kfmqg"
Jan 31 15:15:42 crc kubenswrapper[4751]: I0131 15:15:42.335276 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fh4fz\" (UniqueName: \"kubernetes.io/projected/8169bcf4-4b12-4458-af34-08e57ab8e72a-kube-api-access-fh4fz\") pod \"8169bcf4-4b12-4458-af34-08e57ab8e72a\" (UID: \"8169bcf4-4b12-4458-af34-08e57ab8e72a\") "
Jan 31 15:15:42 crc kubenswrapper[4751]: I0131 15:15:42.335395 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8169bcf4-4b12-4458-af34-08e57ab8e72a-must-gather-output\") pod \"8169bcf4-4b12-4458-af34-08e57ab8e72a\" (UID: \"8169bcf4-4b12-4458-af34-08e57ab8e72a\") "
Jan 31 15:15:42 crc kubenswrapper[4751]: I0131 15:15:42.355713 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8169bcf4-4b12-4458-af34-08e57ab8e72a-kube-api-access-fh4fz" (OuterVolumeSpecName: "kube-api-access-fh4fz") pod "8169bcf4-4b12-4458-af34-08e57ab8e72a" (UID: "8169bcf4-4b12-4458-af34-08e57ab8e72a"). InnerVolumeSpecName "kube-api-access-fh4fz". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 15:15:42 crc kubenswrapper[4751]: I0131 15:15:42.418776 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8169bcf4-4b12-4458-af34-08e57ab8e72a-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "8169bcf4-4b12-4458-af34-08e57ab8e72a" (UID: "8169bcf4-4b12-4458-af34-08e57ab8e72a"). InnerVolumeSpecName "must-gather-output". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 15:15:42 crc kubenswrapper[4751]: I0131 15:15:42.422520 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8169bcf4-4b12-4458-af34-08e57ab8e72a" path="/var/lib/kubelet/pods/8169bcf4-4b12-4458-af34-08e57ab8e72a/volumes"
Jan 31 15:15:42 crc kubenswrapper[4751]: I0131 15:15:42.438275 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fh4fz\" (UniqueName: \"kubernetes.io/projected/8169bcf4-4b12-4458-af34-08e57ab8e72a-kube-api-access-fh4fz\") on node \"crc\" DevicePath \"\""
Jan 31 15:15:42 crc kubenswrapper[4751]: I0131 15:15:42.438304 4751 reconciler_common.go:293] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/8169bcf4-4b12-4458-af34-08e57ab8e72a-must-gather-output\") on node \"crc\" DevicePath \"\""
Jan 31 15:15:42 crc kubenswrapper[4751]: I0131 15:15:42.537871 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7q8qd"]
Jan 31 15:15:42 crc kubenswrapper[4751]: E0131 15:15:42.538181 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8169bcf4-4b12-4458-af34-08e57ab8e72a" containerName="gather"
Jan 31 15:15:42 crc kubenswrapper[4751]: I0131 15:15:42.538197 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8169bcf4-4b12-4458-af34-08e57ab8e72a" containerName="gather"
Jan 31 15:15:42 crc kubenswrapper[4751]: E0131 15:15:42.538216 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8169bcf4-4b12-4458-af34-08e57ab8e72a" containerName="copy"
Jan 31 15:15:42 crc kubenswrapper[4751]: I0131 15:15:42.538224 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="8169bcf4-4b12-4458-af34-08e57ab8e72a" containerName="copy"
Jan 31 15:15:42 crc kubenswrapper[4751]: E0131 15:15:42.538249 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ce076099-f84a-49c6-9566-cae17c8efd6d" containerName="collect-profiles"
Jan 31 15:15:42 crc kubenswrapper[4751]: I0131 15:15:42.538257 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce076099-f84a-49c6-9566-cae17c8efd6d" containerName="collect-profiles"
Jan 31 15:15:42 crc kubenswrapper[4751]: I0131 15:15:42.538392 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="ce076099-f84a-49c6-9566-cae17c8efd6d" containerName="collect-profiles"
Jan 31 15:15:42 crc kubenswrapper[4751]: I0131 15:15:42.538412 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="8169bcf4-4b12-4458-af34-08e57ab8e72a" containerName="gather"
Jan 31 15:15:42 crc kubenswrapper[4751]: I0131 15:15:42.538422 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="8169bcf4-4b12-4458-af34-08e57ab8e72a" containerName="copy"
Jan 31 15:15:42 crc kubenswrapper[4751]: I0131 15:15:42.539381 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7q8qd"
Jan 31 15:15:42 crc kubenswrapper[4751]: I0131 15:15:42.551307 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7q8qd"]
Jan 31 15:15:42 crc kubenswrapper[4751]: I0131 15:15:42.640465 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0de97a3-8d1b-4cb9-baf2-94cdac6fa611-catalog-content\") pod \"community-operators-7q8qd\" (UID: \"f0de97a3-8d1b-4cb9-baf2-94cdac6fa611\") " pod="openshift-marketplace/community-operators-7q8qd"
Jan 31 15:15:42 crc kubenswrapper[4751]: I0131 15:15:42.640509 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz9xp\" (UniqueName: \"kubernetes.io/projected/f0de97a3-8d1b-4cb9-baf2-94cdac6fa611-kube-api-access-qz9xp\") pod \"community-operators-7q8qd\" (UID: \"f0de97a3-8d1b-4cb9-baf2-94cdac6fa611\") " pod="openshift-marketplace/community-operators-7q8qd"
Jan 31 15:15:42 crc kubenswrapper[4751]: I0131 15:15:42.640560 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0de97a3-8d1b-4cb9-baf2-94cdac6fa611-utilities\") pod \"community-operators-7q8qd\" (UID: \"f0de97a3-8d1b-4cb9-baf2-94cdac6fa611\") " pod="openshift-marketplace/community-operators-7q8qd"
Jan 31 15:15:42 crc kubenswrapper[4751]: I0131 15:15:42.741765 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0de97a3-8d1b-4cb9-baf2-94cdac6fa611-catalog-content\") pod \"community-operators-7q8qd\" (UID: \"f0de97a3-8d1b-4cb9-baf2-94cdac6fa611\") " pod="openshift-marketplace/community-operators-7q8qd"
Jan 31 15:15:42 crc kubenswrapper[4751]: I0131 15:15:42.741818 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qz9xp\" (UniqueName: \"kubernetes.io/projected/f0de97a3-8d1b-4cb9-baf2-94cdac6fa611-kube-api-access-qz9xp\") pod \"community-operators-7q8qd\" (UID: \"f0de97a3-8d1b-4cb9-baf2-94cdac6fa611\") " pod="openshift-marketplace/community-operators-7q8qd"
Jan 31 15:15:42 crc kubenswrapper[4751]: I0131 15:15:42.741894 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0de97a3-8d1b-4cb9-baf2-94cdac6fa611-utilities\") pod \"community-operators-7q8qd\" (UID: \"f0de97a3-8d1b-4cb9-baf2-94cdac6fa611\") " pod="openshift-marketplace/community-operators-7q8qd"
Jan 31 15:15:42 crc kubenswrapper[4751]: I0131 15:15:42.742287 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0de97a3-8d1b-4cb9-baf2-94cdac6fa611-catalog-content\") pod \"community-operators-7q8qd\" (UID: \"f0de97a3-8d1b-4cb9-baf2-94cdac6fa611\") " pod="openshift-marketplace/community-operators-7q8qd"
Jan 31 15:15:42 crc kubenswrapper[4751]: I0131 15:15:42.742379 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0de97a3-8d1b-4cb9-baf2-94cdac6fa611-utilities\") pod \"community-operators-7q8qd\" (UID: \"f0de97a3-8d1b-4cb9-baf2-94cdac6fa611\") " pod="openshift-marketplace/community-operators-7q8qd"
Jan 31 15:15:42 crc kubenswrapper[4751]: I0131 15:15:42.758836 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz9xp\" (UniqueName: \"kubernetes.io/projected/f0de97a3-8d1b-4cb9-baf2-94cdac6fa611-kube-api-access-qz9xp\") pod \"community-operators-7q8qd\" (UID: \"f0de97a3-8d1b-4cb9-baf2-94cdac6fa611\") " pod="openshift-marketplace/community-operators-7q8qd"
Jan 31 15:15:42 crc kubenswrapper[4751]: I0131 15:15:42.860227 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7q8qd"
Jan 31 15:15:43 crc kubenswrapper[4751]: I0131 15:15:43.199921 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7q8qd"]
Jan 31 15:15:43 crc kubenswrapper[4751]: I0131 15:15:43.264204 4751 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-n5n7n_must-gather-kfmqg_8169bcf4-4b12-4458-af34-08e57ab8e72a/copy/0.log"
Jan 31 15:15:43 crc kubenswrapper[4751]: I0131 15:15:43.264553 4751 scope.go:117] "RemoveContainer" containerID="d92eb14a52593d6fc22bb08aec8df39b0e7882a4fa59d23aac4f2c37f671f001"
Jan 31 15:15:43 crc kubenswrapper[4751]: I0131 15:15:43.264621 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-n5n7n/must-gather-kfmqg"
Jan 31 15:15:43 crc kubenswrapper[4751]: I0131 15:15:43.265481 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7q8qd" event={"ID":"f0de97a3-8d1b-4cb9-baf2-94cdac6fa611","Type":"ContainerStarted","Data":"afe835c1ad53315ecbe4f222bec28e72de4c1ccdcb10edb681720dd2cc8c1f4c"}
Jan 31 15:15:43 crc kubenswrapper[4751]: I0131 15:15:43.279195 4751 scope.go:117] "RemoveContainer" containerID="cc356eac62cc8a9dc1d2dd948da59d8ed49fe19d58d0b369f5e6af812f05dc08"
Jan 31 15:15:44 crc kubenswrapper[4751]: I0131 15:15:44.274583 4751 generic.go:334] "Generic (PLEG): container finished" podID="f0de97a3-8d1b-4cb9-baf2-94cdac6fa611" containerID="abd04023c388da640bc843dc513c7f9d80f0d9a758b339a8165a2bc9c26df42f" exitCode=0
Jan 31 15:15:44 crc kubenswrapper[4751]: I0131 15:15:44.274695 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7q8qd" event={"ID":"f0de97a3-8d1b-4cb9-baf2-94cdac6fa611","Type":"ContainerDied","Data":"abd04023c388da640bc843dc513c7f9d80f0d9a758b339a8165a2bc9c26df42f"}
Jan 31 15:15:44 crc kubenswrapper[4751]: I0131 15:15:44.278240 4751 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 31 15:15:46 crc kubenswrapper[4751]: I0131 15:15:46.293961 4751 generic.go:334] "Generic (PLEG): container finished" podID="f0de97a3-8d1b-4cb9-baf2-94cdac6fa611" containerID="07a29af813c56d3015d9196299ce1c32648d906dd1592919368710f2b8adff3d" exitCode=0
Jan 31 15:15:46 crc kubenswrapper[4751]: I0131 15:15:46.294063 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7q8qd" event={"ID":"f0de97a3-8d1b-4cb9-baf2-94cdac6fa611","Type":"ContainerDied","Data":"07a29af813c56d3015d9196299ce1c32648d906dd1592919368710f2b8adff3d"}
Jan 31 15:15:47 crc kubenswrapper[4751]: I0131 15:15:47.303476 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7q8qd" event={"ID":"f0de97a3-8d1b-4cb9-baf2-94cdac6fa611","Type":"ContainerStarted","Data":"3a1afa3565b79940e6783ac512f7eb65596e794ecce6cdfddc4f941b25daaa31"}
Jan 31 15:15:47 crc kubenswrapper[4751]: I0131 15:15:47.340744 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7q8qd" podStartSLOduration=2.912751546 podStartE2EDuration="5.340719813s" podCreationTimestamp="2026-01-31 15:15:42 +0000 UTC" firstStartedPulling="2026-01-31 15:15:44.277892486 +0000 UTC m=+2048.652605381" lastFinishedPulling="2026-01-31 15:15:46.705860753 +0000 UTC m=+2051.080573648" observedRunningTime="2026-01-31 15:15:47.335354619 +0000 UTC m=+2051.710067514" watchObservedRunningTime="2026-01-31 15:15:47.340719813 +0000 UTC m=+2051.715432698"
Jan 31 15:15:48 crc kubenswrapper[4751]: I0131 15:15:48.183375 4751 scope.go:117] "RemoveContainer" containerID="9d26a6d6092efc3cfe1b53bda2539e32fc75d0f27a288ecda4b2062254a0fc73"
Jan 31 15:15:52 crc kubenswrapper[4751]: I0131 15:15:52.860569 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-7q8qd"
Jan 31 15:15:52 crc kubenswrapper[4751]: I0131 15:15:52.861100 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7q8qd"
Jan 31 15:15:52 crc kubenswrapper[4751]: I0131 15:15:52.912809 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7q8qd"
Jan 31 15:15:53 crc kubenswrapper[4751]: I0131 15:15:53.389712 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7q8qd"
Jan 31 15:15:53 crc kubenswrapper[4751]: I0131 15:15:53.434986 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7q8qd"]
Jan 31 15:15:55 crc kubenswrapper[4751]: I0131 15:15:55.352664 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7q8qd" podUID="f0de97a3-8d1b-4cb9-baf2-94cdac6fa611" containerName="registry-server" containerID="cri-o://3a1afa3565b79940e6783ac512f7eb65596e794ecce6cdfddc4f941b25daaa31" gracePeriod=2
Jan 31 15:15:55 crc kubenswrapper[4751]: I0131 15:15:55.711014 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7q8qd"
Jan 31 15:15:55 crc kubenswrapper[4751]: I0131 15:15:55.816300 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz9xp\" (UniqueName: \"kubernetes.io/projected/f0de97a3-8d1b-4cb9-baf2-94cdac6fa611-kube-api-access-qz9xp\") pod \"f0de97a3-8d1b-4cb9-baf2-94cdac6fa611\" (UID: \"f0de97a3-8d1b-4cb9-baf2-94cdac6fa611\") "
Jan 31 15:15:55 crc kubenswrapper[4751]: I0131 15:15:55.816443 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0de97a3-8d1b-4cb9-baf2-94cdac6fa611-catalog-content\") pod \"f0de97a3-8d1b-4cb9-baf2-94cdac6fa611\" (UID: \"f0de97a3-8d1b-4cb9-baf2-94cdac6fa611\") "
Jan 31 15:15:55 crc kubenswrapper[4751]: I0131 15:15:55.817138 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0de97a3-8d1b-4cb9-baf2-94cdac6fa611-utilities\") pod \"f0de97a3-8d1b-4cb9-baf2-94cdac6fa611\" (UID: \"f0de97a3-8d1b-4cb9-baf2-94cdac6fa611\") "
Jan 31 15:15:55 crc kubenswrapper[4751]: I0131 15:15:55.817550 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0de97a3-8d1b-4cb9-baf2-94cdac6fa611-utilities" (OuterVolumeSpecName: "utilities") pod "f0de97a3-8d1b-4cb9-baf2-94cdac6fa611" (UID: "f0de97a3-8d1b-4cb9-baf2-94cdac6fa611"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 15:15:55 crc kubenswrapper[4751]: I0131 15:15:55.821940 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f0de97a3-8d1b-4cb9-baf2-94cdac6fa611-kube-api-access-qz9xp" (OuterVolumeSpecName: "kube-api-access-qz9xp") pod "f0de97a3-8d1b-4cb9-baf2-94cdac6fa611" (UID: "f0de97a3-8d1b-4cb9-baf2-94cdac6fa611"). InnerVolumeSpecName "kube-api-access-qz9xp". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 31 15:15:55 crc kubenswrapper[4751]: I0131 15:15:55.868350 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f0de97a3-8d1b-4cb9-baf2-94cdac6fa611-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f0de97a3-8d1b-4cb9-baf2-94cdac6fa611" (UID: "f0de97a3-8d1b-4cb9-baf2-94cdac6fa611"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 31 15:15:55 crc kubenswrapper[4751]: I0131 15:15:55.918727 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qz9xp\" (UniqueName: \"kubernetes.io/projected/f0de97a3-8d1b-4cb9-baf2-94cdac6fa611-kube-api-access-qz9xp\") on node \"crc\" DevicePath \"\""
Jan 31 15:15:55 crc kubenswrapper[4751]: I0131 15:15:55.918774 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f0de97a3-8d1b-4cb9-baf2-94cdac6fa611-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 31 15:15:55 crc kubenswrapper[4751]: I0131 15:15:55.918792 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f0de97a3-8d1b-4cb9-baf2-94cdac6fa611-utilities\") on node \"crc\" DevicePath \"\""
Jan 31 15:15:56 crc kubenswrapper[4751]: I0131 15:15:56.364007 4751 generic.go:334] "Generic (PLEG): container finished" podID="f0de97a3-8d1b-4cb9-baf2-94cdac6fa611" containerID="3a1afa3565b79940e6783ac512f7eb65596e794ecce6cdfddc4f941b25daaa31" exitCode=0
Jan 31 15:15:56 crc kubenswrapper[4751]: I0131 15:15:56.364057 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7q8qd" event={"ID":"f0de97a3-8d1b-4cb9-baf2-94cdac6fa611","Type":"ContainerDied","Data":"3a1afa3565b79940e6783ac512f7eb65596e794ecce6cdfddc4f941b25daaa31"}
Jan 31 15:15:56 crc kubenswrapper[4751]: I0131 15:15:56.364110 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7q8qd" event={"ID":"f0de97a3-8d1b-4cb9-baf2-94cdac6fa611","Type":"ContainerDied","Data":"afe835c1ad53315ecbe4f222bec28e72de4c1ccdcb10edb681720dd2cc8c1f4c"}
Jan 31 15:15:56 crc kubenswrapper[4751]: I0131 15:15:56.364151 4751 scope.go:117] "RemoveContainer" containerID="3a1afa3565b79940e6783ac512f7eb65596e794ecce6cdfddc4f941b25daaa31"
Jan 31 15:15:56 crc kubenswrapper[4751]: I0131 15:15:56.364197 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7q8qd"
Jan 31 15:15:56 crc kubenswrapper[4751]: I0131 15:15:56.397917 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7q8qd"]
Jan 31 15:15:56 crc kubenswrapper[4751]: I0131 15:15:56.400237 4751 scope.go:117] "RemoveContainer" containerID="07a29af813c56d3015d9196299ce1c32648d906dd1592919368710f2b8adff3d"
Jan 31 15:15:56 crc kubenswrapper[4751]: I0131 15:15:56.402763 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7q8qd"]
Jan 31 15:15:56 crc kubenswrapper[4751]: I0131 15:15:56.413920 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f0de97a3-8d1b-4cb9-baf2-94cdac6fa611" path="/var/lib/kubelet/pods/f0de97a3-8d1b-4cb9-baf2-94cdac6fa611/volumes"
Jan 31 15:15:56 crc kubenswrapper[4751]: I0131 15:15:56.421188 4751 scope.go:117] "RemoveContainer" containerID="abd04023c388da640bc843dc513c7f9d80f0d9a758b339a8165a2bc9c26df42f"
Jan 31 15:15:56 crc kubenswrapper[4751]: I0131 15:15:56.441688 4751 scope.go:117] "RemoveContainer" containerID="3a1afa3565b79940e6783ac512f7eb65596e794ecce6cdfddc4f941b25daaa31"
Jan 31 15:15:56 crc kubenswrapper[4751]: E0131 15:15:56.442119 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a1afa3565b79940e6783ac512f7eb65596e794ecce6cdfddc4f941b25daaa31\": container with ID starting with 3a1afa3565b79940e6783ac512f7eb65596e794ecce6cdfddc4f941b25daaa31 not found: ID does not exist" containerID="3a1afa3565b79940e6783ac512f7eb65596e794ecce6cdfddc4f941b25daaa31"
Jan 31 15:15:56 crc kubenswrapper[4751]: I0131 15:15:56.442163 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a1afa3565b79940e6783ac512f7eb65596e794ecce6cdfddc4f941b25daaa31"} err="failed to get container status \"3a1afa3565b79940e6783ac512f7eb65596e794ecce6cdfddc4f941b25daaa31\": rpc error: code = NotFound desc = could not find container \"3a1afa3565b79940e6783ac512f7eb65596e794ecce6cdfddc4f941b25daaa31\": container with ID starting with 3a1afa3565b79940e6783ac512f7eb65596e794ecce6cdfddc4f941b25daaa31 not found: ID does not exist"
Jan 31 15:15:56 crc kubenswrapper[4751]: I0131 15:15:56.442188 4751 scope.go:117] "RemoveContainer" containerID="07a29af813c56d3015d9196299ce1c32648d906dd1592919368710f2b8adff3d"
Jan 31 15:15:56 crc kubenswrapper[4751]: E0131 15:15:56.442564 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"07a29af813c56d3015d9196299ce1c32648d906dd1592919368710f2b8adff3d\": container with ID starting with 07a29af813c56d3015d9196299ce1c32648d906dd1592919368710f2b8adff3d not found: ID does not exist" containerID="07a29af813c56d3015d9196299ce1c32648d906dd1592919368710f2b8adff3d"
Jan 31 15:15:56 crc kubenswrapper[4751]: I0131 15:15:56.442601 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"07a29af813c56d3015d9196299ce1c32648d906dd1592919368710f2b8adff3d"} err="failed to get container status \"07a29af813c56d3015d9196299ce1c32648d906dd1592919368710f2b8adff3d\": rpc error: code = NotFound desc = could not find container \"07a29af813c56d3015d9196299ce1c32648d906dd1592919368710f2b8adff3d\": container with ID starting with 07a29af813c56d3015d9196299ce1c32648d906dd1592919368710f2b8adff3d not found: ID does not exist"
Jan 31 15:15:56 crc kubenswrapper[4751]: I0131 15:15:56.442626 4751 scope.go:117] "RemoveContainer" containerID="abd04023c388da640bc843dc513c7f9d80f0d9a758b339a8165a2bc9c26df42f"
Jan 31 15:15:56 crc kubenswrapper[4751]: E0131 15:15:56.442889 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abd04023c388da640bc843dc513c7f9d80f0d9a758b339a8165a2bc9c26df42f\": container with ID starting with abd04023c388da640bc843dc513c7f9d80f0d9a758b339a8165a2bc9c26df42f not found: ID does not exist" containerID="abd04023c388da640bc843dc513c7f9d80f0d9a758b339a8165a2bc9c26df42f"
Jan 31 15:15:56 crc kubenswrapper[4751]: I0131 15:15:56.442915 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abd04023c388da640bc843dc513c7f9d80f0d9a758b339a8165a2bc9c26df42f"} err="failed to get container status \"abd04023c388da640bc843dc513c7f9d80f0d9a758b339a8165a2bc9c26df42f\": rpc error: code = NotFound desc = could not find container \"abd04023c388da640bc843dc513c7f9d80f0d9a758b339a8165a2bc9c26df42f\": container with ID starting with abd04023c388da640bc843dc513c7f9d80f0d9a758b339a8165a2bc9c26df42f not found: ID does not exist"
Jan 31 15:17:08 crc kubenswrapper[4751]: I0131 15:17:08.896208 4751 patch_prober.go:28] interesting pod/machine-config-daemon-2wpj7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 31 15:17:08 crc kubenswrapper[4751]: I0131 15:17:08.898099 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 31 15:17:12 crc kubenswrapper[4751]: E0131 15:17:12.318503 4751 configmap.go:193] Couldn't get configMap glance-kuttl-tests/openstack-config: configmap "openstack-config" not found
Jan 31 15:17:12 crc kubenswrapper[4751]: E0131 15:17:12.318769 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config podName:eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb nodeName:}" failed. No retries permitted until 2026-01-31 15:19:14.318753784 +0000 UTC m=+2258.693466669 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "openstack-config" (UniqueName: "kubernetes.io/configmap/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config") pod "openstackclient" (UID: "eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb") : configmap "openstack-config" not found
Jan 31 15:17:12 crc kubenswrapper[4751]: E0131 15:17:12.318549 4751 secret.go:188] Couldn't get secret glance-kuttl-tests/openstack-config-secret: secret "openstack-config-secret" not found
Jan 31 15:17:12 crc kubenswrapper[4751]: E0131 15:17:12.318882 4751 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config-secret podName:eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb nodeName:}" failed. No retries permitted until 2026-01-31 15:19:14.318862597 +0000 UTC m=+2258.693575482 (durationBeforeRetry 2m2s). Error: MountVolume.SetUp failed for volume "openstack-config-secret" (UniqueName: "kubernetes.io/secret/eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb-openstack-config-secret") pod "openstackclient" (UID: "eb3a93b7-3d78-48ab-b2e0-d31a7adefaeb") : secret "openstack-config-secret" not found
Jan 31 15:17:28 crc kubenswrapper[4751]: I0131 15:17:28.832247 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-5fvzz"]
Jan 31 15:17:28 crc kubenswrapper[4751]: E0131 15:17:28.833219 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0de97a3-8d1b-4cb9-baf2-94cdac6fa611" containerName="extract-content"
Jan 31 15:17:28 crc kubenswrapper[4751]: I0131 15:17:28.833241 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0de97a3-8d1b-4cb9-baf2-94cdac6fa611" containerName="extract-content"
Jan 31 15:17:28 crc kubenswrapper[4751]: E0131 15:17:28.833274 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0de97a3-8d1b-4cb9-baf2-94cdac6fa611" containerName="registry-server"
Jan 31 15:17:28 crc kubenswrapper[4751]: I0131 15:17:28.833286 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0de97a3-8d1b-4cb9-baf2-94cdac6fa611" containerName="registry-server"
Jan 31 15:17:28 crc kubenswrapper[4751]: E0131 15:17:28.833325 4751 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f0de97a3-8d1b-4cb9-baf2-94cdac6fa611" containerName="extract-utilities"
Jan 31 15:17:28 crc kubenswrapper[4751]: I0131 15:17:28.833338 4751 state_mem.go:107] "Deleted CPUSet assignment" podUID="f0de97a3-8d1b-4cb9-baf2-94cdac6fa611" containerName="extract-utilities"
Jan 31 15:17:28 crc kubenswrapper[4751]: I0131 15:17:28.833530 4751 memory_manager.go:354] "RemoveStaleState removing state" podUID="f0de97a3-8d1b-4cb9-baf2-94cdac6fa611" containerName="registry-server"
Jan 31 15:17:28 crc kubenswrapper[4751]: I0131 15:17:28.837096 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5fvzz"
Jan 31 15:17:28 crc kubenswrapper[4751]: I0131 15:17:28.846243 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5fvzz"]
Jan 31 15:17:28 crc kubenswrapper[4751]: I0131 15:17:28.944241 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa2d3a4e-15bd-4b0d-b187-a4db9049522f-catalog-content\") pod \"redhat-marketplace-5fvzz\" (UID: \"aa2d3a4e-15bd-4b0d-b187-a4db9049522f\") " pod="openshift-marketplace/redhat-marketplace-5fvzz"
Jan 31 15:17:28 crc kubenswrapper[4751]: I0131 15:17:28.944561 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7p44s\" (UniqueName: \"kubernetes.io/projected/aa2d3a4e-15bd-4b0d-b187-a4db9049522f-kube-api-access-7p44s\") pod \"redhat-marketplace-5fvzz\" (UID: \"aa2d3a4e-15bd-4b0d-b187-a4db9049522f\") " pod="openshift-marketplace/redhat-marketplace-5fvzz"
Jan 31 15:17:28 crc kubenswrapper[4751]: I0131 15:17:28.944685 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa2d3a4e-15bd-4b0d-b187-a4db9049522f-utilities\") pod \"redhat-marketplace-5fvzz\" (UID: \"aa2d3a4e-15bd-4b0d-b187-a4db9049522f\") " pod="openshift-marketplace/redhat-marketplace-5fvzz"
Jan 31 15:17:29 crc kubenswrapper[4751]: I0131 15:17:29.045445 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa2d3a4e-15bd-4b0d-b187-a4db9049522f-catalog-content\") pod \"redhat-marketplace-5fvzz\" (UID: \"aa2d3a4e-15bd-4b0d-b187-a4db9049522f\") " pod="openshift-marketplace/redhat-marketplace-5fvzz"
Jan 31 15:17:29 crc kubenswrapper[4751]: I0131 15:17:29.045761 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7p44s\" (UniqueName: \"kubernetes.io/projected/aa2d3a4e-15bd-4b0d-b187-a4db9049522f-kube-api-access-7p44s\") pod \"redhat-marketplace-5fvzz\" (UID: \"aa2d3a4e-15bd-4b0d-b187-a4db9049522f\") " pod="openshift-marketplace/redhat-marketplace-5fvzz"
Jan 31 15:17:29 crc kubenswrapper[4751]: I0131 15:17:29.045880 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa2d3a4e-15bd-4b0d-b187-a4db9049522f-utilities\") pod \"redhat-marketplace-5fvzz\" (UID: \"aa2d3a4e-15bd-4b0d-b187-a4db9049522f\") " pod="openshift-marketplace/redhat-marketplace-5fvzz"
Jan 31 15:17:29 crc kubenswrapper[4751]: I0131 15:17:29.046512 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa2d3a4e-15bd-4b0d-b187-a4db9049522f-utilities\") pod \"redhat-marketplace-5fvzz\" (UID: \"aa2d3a4e-15bd-4b0d-b187-a4db9049522f\") " pod="openshift-marketplace/redhat-marketplace-5fvzz"
Jan 31 15:17:29 crc kubenswrapper[4751]: I0131 15:17:29.046876 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa2d3a4e-15bd-4b0d-b187-a4db9049522f-catalog-content\") pod \"redhat-marketplace-5fvzz\" (UID: \"aa2d3a4e-15bd-4b0d-b187-a4db9049522f\") " pod="openshift-marketplace/redhat-marketplace-5fvzz"
Jan 31 15:17:29 crc kubenswrapper[4751]: I0131 15:17:29.076703 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7p44s\" (UniqueName: \"kubernetes.io/projected/aa2d3a4e-15bd-4b0d-b187-a4db9049522f-kube-api-access-7p44s\") pod \"redhat-marketplace-5fvzz\" (UID: \"aa2d3a4e-15bd-4b0d-b187-a4db9049522f\") " pod="openshift-marketplace/redhat-marketplace-5fvzz"
Jan 31 15:17:29 crc kubenswrapper[4751]: I0131 15:17:29.165024 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5fvzz"
Jan 31 15:17:29 crc kubenswrapper[4751]: I0131 15:17:29.564228 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-5fvzz"]
Jan 31 15:17:30 crc kubenswrapper[4751]: I0131 15:17:30.030419 4751 generic.go:334] "Generic (PLEG): container finished" podID="aa2d3a4e-15bd-4b0d-b187-a4db9049522f" containerID="1bb577b2261a76c8b92f3b1e2f277f6b87a8febd1df30295d8a1636ce64ffba7" exitCode=0
Jan 31 15:17:30 crc kubenswrapper[4751]: I0131 15:17:30.030494 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5fvzz" event={"ID":"aa2d3a4e-15bd-4b0d-b187-a4db9049522f","Type":"ContainerDied","Data":"1bb577b2261a76c8b92f3b1e2f277f6b87a8febd1df30295d8a1636ce64ffba7"}
Jan 31 15:17:30 crc kubenswrapper[4751]: I0131 15:17:30.030768 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5fvzz" event={"ID":"aa2d3a4e-15bd-4b0d-b187-a4db9049522f","Type":"ContainerStarted","Data":"425504c025aad8a53ebdc18ebf06cc6728cfc8eaec5f1ce8efad85e1606a66c7"}
Jan 31 15:17:31 crc kubenswrapper[4751]: I0131 15:17:31.039452 4751 generic.go:334] "Generic (PLEG): container finished" podID="aa2d3a4e-15bd-4b0d-b187-a4db9049522f" containerID="f5cc5de70aba5a8a756cd1dc84d47958d0b1e225fc2bd378950d452ba598ee88" exitCode=0
Jan 31 15:17:31 crc kubenswrapper[4751]: I0131 15:17:31.039517 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5fvzz" event={"ID":"aa2d3a4e-15bd-4b0d-b187-a4db9049522f","Type":"ContainerDied","Data":"f5cc5de70aba5a8a756cd1dc84d47958d0b1e225fc2bd378950d452ba598ee88"}
Jan 31 15:17:32 crc kubenswrapper[4751]: I0131 15:17:32.048148 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5fvzz" event={"ID":"aa2d3a4e-15bd-4b0d-b187-a4db9049522f","Type":"ContainerStarted","Data":"f2e5db66425ffe36121edf1059e80641ecbb035af5d94d86cad041ad2f36fd08"}
Jan 31 15:17:32 crc kubenswrapper[4751]: I0131 15:17:32.074176 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-5fvzz" podStartSLOduration=2.659696797 podStartE2EDuration="4.074159997s" podCreationTimestamp="2026-01-31 15:17:28 +0000 UTC" firstStartedPulling="2026-01-31 15:17:30.031910007 +0000 UTC m=+2154.406622892" lastFinishedPulling="2026-01-31 15:17:31.446373197 +0000 UTC m=+2155.821086092" observedRunningTime="2026-01-31 15:17:32.070810218 +0000 UTC m=+2156.445523103" watchObservedRunningTime="2026-01-31 15:17:32.074159997 +0000 UTC m=+2156.448872882"
Jan 31 15:17:35 crc kubenswrapper[4751]: I0131 15:17:35.199425 4751 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-884dp"]
Jan 31 15:17:35 crc kubenswrapper[4751]: I0131 15:17:35.200920 4751 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-884dp"
Jan 31 15:17:35 crc kubenswrapper[4751]: I0131 15:17:35.211884 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-884dp"]
Jan 31 15:17:35 crc kubenswrapper[4751]: I0131 15:17:35.243340 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gnvk2\" (UniqueName: \"kubernetes.io/projected/22e797bc-1dbd-481f-bb51-c4a04114ecda-kube-api-access-gnvk2\") pod \"redhat-operators-884dp\" (UID: \"22e797bc-1dbd-481f-bb51-c4a04114ecda\") " pod="openshift-marketplace/redhat-operators-884dp"
Jan 31 15:17:35 crc kubenswrapper[4751]: I0131 15:17:35.243465 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22e797bc-1dbd-481f-bb51-c4a04114ecda-utilities\") pod \"redhat-operators-884dp\" (UID: \"22e797bc-1dbd-481f-bb51-c4a04114ecda\") " pod="openshift-marketplace/redhat-operators-884dp"
Jan 31 15:17:35 crc kubenswrapper[4751]: I0131 15:17:35.243576 4751 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22e797bc-1dbd-481f-bb51-c4a04114ecda-catalog-content\") pod \"redhat-operators-884dp\" (UID: \"22e797bc-1dbd-481f-bb51-c4a04114ecda\") " pod="openshift-marketplace/redhat-operators-884dp"
Jan 31 15:17:35 crc kubenswrapper[4751]: I0131 15:17:35.344785 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gnvk2\" (UniqueName: \"kubernetes.io/projected/22e797bc-1dbd-481f-bb51-c4a04114ecda-kube-api-access-gnvk2\") pod \"redhat-operators-884dp\" (UID: \"22e797bc-1dbd-481f-bb51-c4a04114ecda\") " pod="openshift-marketplace/redhat-operators-884dp"
Jan 31 15:17:35 crc kubenswrapper[4751]: I0131 15:17:35.344855 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22e797bc-1dbd-481f-bb51-c4a04114ecda-utilities\") pod \"redhat-operators-884dp\" (UID: \"22e797bc-1dbd-481f-bb51-c4a04114ecda\") " pod="openshift-marketplace/redhat-operators-884dp"
Jan 31 15:17:35 crc kubenswrapper[4751]: I0131 15:17:35.344926 4751 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22e797bc-1dbd-481f-bb51-c4a04114ecda-catalog-content\") pod \"redhat-operators-884dp\" (UID: \"22e797bc-1dbd-481f-bb51-c4a04114ecda\") " pod="openshift-marketplace/redhat-operators-884dp"
Jan 31 15:17:35 crc kubenswrapper[4751]: I0131 15:17:35.345479 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/22e797bc-1dbd-481f-bb51-c4a04114ecda-catalog-content\") pod \"redhat-operators-884dp\" (UID: \"22e797bc-1dbd-481f-bb51-c4a04114ecda\") " pod="openshift-marketplace/redhat-operators-884dp"
Jan 31 15:17:35 crc kubenswrapper[4751]: I0131 15:17:35.345653 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/22e797bc-1dbd-481f-bb51-c4a04114ecda-utilities\") pod \"redhat-operators-884dp\" (UID: \"22e797bc-1dbd-481f-bb51-c4a04114ecda\") " pod="openshift-marketplace/redhat-operators-884dp"
Jan 31 15:17:35 crc kubenswrapper[4751]: I0131 15:17:35.370475 4751 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gnvk2\" (UniqueName: \"kubernetes.io/projected/22e797bc-1dbd-481f-bb51-c4a04114ecda-kube-api-access-gnvk2\") pod \"redhat-operators-884dp\" (UID: \"22e797bc-1dbd-481f-bb51-c4a04114ecda\") " pod="openshift-marketplace/redhat-operators-884dp"
Jan 31 15:17:35 crc kubenswrapper[4751]: I0131 15:17:35.524747 4751 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-884dp" Jan 31 15:17:35 crc kubenswrapper[4751]: I0131 15:17:35.770584 4751 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-884dp"] Jan 31 15:17:36 crc kubenswrapper[4751]: I0131 15:17:36.073057 4751 generic.go:334] "Generic (PLEG): container finished" podID="22e797bc-1dbd-481f-bb51-c4a04114ecda" containerID="a89d2ffa602b194f1e3ddd9004485c6ff999c6fb57b08c748be6c30f28b34770" exitCode=0 Jan 31 15:17:36 crc kubenswrapper[4751]: I0131 15:17:36.073327 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-884dp" event={"ID":"22e797bc-1dbd-481f-bb51-c4a04114ecda","Type":"ContainerDied","Data":"a89d2ffa602b194f1e3ddd9004485c6ff999c6fb57b08c748be6c30f28b34770"} Jan 31 15:17:36 crc kubenswrapper[4751]: I0131 15:17:36.073822 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-884dp" event={"ID":"22e797bc-1dbd-481f-bb51-c4a04114ecda","Type":"ContainerStarted","Data":"03a99742f46b04ca6f4e4681bc1cc7dbb9c2e6413829a7704f70d7b3e8fddd54"} Jan 31 15:17:37 crc kubenswrapper[4751]: I0131 15:17:37.081170 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-884dp" event={"ID":"22e797bc-1dbd-481f-bb51-c4a04114ecda","Type":"ContainerStarted","Data":"45d268c59b0d70d8f64877dfb7af473e60a933653390856ec83bb4377a58cb74"} Jan 31 15:17:38 crc kubenswrapper[4751]: I0131 15:17:38.090299 4751 generic.go:334] "Generic (PLEG): container finished" podID="22e797bc-1dbd-481f-bb51-c4a04114ecda" containerID="45d268c59b0d70d8f64877dfb7af473e60a933653390856ec83bb4377a58cb74" exitCode=0 Jan 31 15:17:38 crc kubenswrapper[4751]: I0131 15:17:38.090350 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-884dp" 
event={"ID":"22e797bc-1dbd-481f-bb51-c4a04114ecda","Type":"ContainerDied","Data":"45d268c59b0d70d8f64877dfb7af473e60a933653390856ec83bb4377a58cb74"} Jan 31 15:17:38 crc kubenswrapper[4751]: I0131 15:17:38.896317 4751 patch_prober.go:28] interesting pod/machine-config-daemon-2wpj7 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 31 15:17:38 crc kubenswrapper[4751]: I0131 15:17:38.896401 4751 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-2wpj7" podUID="b4c170e8-22c9-43a9-8b34-9d626c2ccddc" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 31 15:17:39 crc kubenswrapper[4751]: I0131 15:17:39.165417 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-5fvzz" Jan 31 15:17:39 crc kubenswrapper[4751]: I0131 15:17:39.165502 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-5fvzz" Jan 31 15:17:39 crc kubenswrapper[4751]: I0131 15:17:39.206345 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-5fvzz" Jan 31 15:17:40 crc kubenswrapper[4751]: I0131 15:17:40.102546 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-884dp" event={"ID":"22e797bc-1dbd-481f-bb51-c4a04114ecda","Type":"ContainerStarted","Data":"e2ed793a1f8ffc1705a8422a83553acff596b5770ccaf5a3d6759f957020a04b"} Jan 31 15:17:40 crc kubenswrapper[4751]: I0131 15:17:40.134283 4751 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-884dp" 
podStartSLOduration=1.640223669 podStartE2EDuration="5.134262649s" podCreationTimestamp="2026-01-31 15:17:35 +0000 UTC" firstStartedPulling="2026-01-31 15:17:36.075107065 +0000 UTC m=+2160.449819950" lastFinishedPulling="2026-01-31 15:17:39.569146035 +0000 UTC m=+2163.943858930" observedRunningTime="2026-01-31 15:17:40.133793817 +0000 UTC m=+2164.508506702" watchObservedRunningTime="2026-01-31 15:17:40.134262649 +0000 UTC m=+2164.508975534" Jan 31 15:17:40 crc kubenswrapper[4751]: I0131 15:17:40.156688 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-5fvzz" Jan 31 15:17:40 crc kubenswrapper[4751]: I0131 15:17:40.588466 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5fvzz"] Jan 31 15:17:42 crc kubenswrapper[4751]: I0131 15:17:42.114107 4751 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-5fvzz" podUID="aa2d3a4e-15bd-4b0d-b187-a4db9049522f" containerName="registry-server" containerID="cri-o://f2e5db66425ffe36121edf1059e80641ecbb035af5d94d86cad041ad2f36fd08" gracePeriod=2 Jan 31 15:17:42 crc kubenswrapper[4751]: I0131 15:17:42.647850 4751 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5fvzz" Jan 31 15:17:42 crc kubenswrapper[4751]: I0131 15:17:42.742549 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa2d3a4e-15bd-4b0d-b187-a4db9049522f-catalog-content\") pod \"aa2d3a4e-15bd-4b0d-b187-a4db9049522f\" (UID: \"aa2d3a4e-15bd-4b0d-b187-a4db9049522f\") " Jan 31 15:17:42 crc kubenswrapper[4751]: I0131 15:17:42.742612 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7p44s\" (UniqueName: \"kubernetes.io/projected/aa2d3a4e-15bd-4b0d-b187-a4db9049522f-kube-api-access-7p44s\") pod \"aa2d3a4e-15bd-4b0d-b187-a4db9049522f\" (UID: \"aa2d3a4e-15bd-4b0d-b187-a4db9049522f\") " Jan 31 15:17:42 crc kubenswrapper[4751]: I0131 15:17:42.742681 4751 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa2d3a4e-15bd-4b0d-b187-a4db9049522f-utilities\") pod \"aa2d3a4e-15bd-4b0d-b187-a4db9049522f\" (UID: \"aa2d3a4e-15bd-4b0d-b187-a4db9049522f\") " Jan 31 15:17:42 crc kubenswrapper[4751]: I0131 15:17:42.743758 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa2d3a4e-15bd-4b0d-b187-a4db9049522f-utilities" (OuterVolumeSpecName: "utilities") pod "aa2d3a4e-15bd-4b0d-b187-a4db9049522f" (UID: "aa2d3a4e-15bd-4b0d-b187-a4db9049522f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:17:42 crc kubenswrapper[4751]: I0131 15:17:42.747754 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aa2d3a4e-15bd-4b0d-b187-a4db9049522f-kube-api-access-7p44s" (OuterVolumeSpecName: "kube-api-access-7p44s") pod "aa2d3a4e-15bd-4b0d-b187-a4db9049522f" (UID: "aa2d3a4e-15bd-4b0d-b187-a4db9049522f"). InnerVolumeSpecName "kube-api-access-7p44s". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 31 15:17:42 crc kubenswrapper[4751]: I0131 15:17:42.769504 4751 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aa2d3a4e-15bd-4b0d-b187-a4db9049522f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aa2d3a4e-15bd-4b0d-b187-a4db9049522f" (UID: "aa2d3a4e-15bd-4b0d-b187-a4db9049522f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 31 15:17:42 crc kubenswrapper[4751]: I0131 15:17:42.844534 4751 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aa2d3a4e-15bd-4b0d-b187-a4db9049522f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:42 crc kubenswrapper[4751]: I0131 15:17:42.844569 4751 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7p44s\" (UniqueName: \"kubernetes.io/projected/aa2d3a4e-15bd-4b0d-b187-a4db9049522f-kube-api-access-7p44s\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:42 crc kubenswrapper[4751]: I0131 15:17:42.844582 4751 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aa2d3a4e-15bd-4b0d-b187-a4db9049522f-utilities\") on node \"crc\" DevicePath \"\"" Jan 31 15:17:43 crc kubenswrapper[4751]: I0131 15:17:43.122048 4751 generic.go:334] "Generic (PLEG): container finished" podID="aa2d3a4e-15bd-4b0d-b187-a4db9049522f" containerID="f2e5db66425ffe36121edf1059e80641ecbb035af5d94d86cad041ad2f36fd08" exitCode=0 Jan 31 15:17:43 crc kubenswrapper[4751]: I0131 15:17:43.122118 4751 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-5fvzz" event={"ID":"aa2d3a4e-15bd-4b0d-b187-a4db9049522f","Type":"ContainerDied","Data":"f2e5db66425ffe36121edf1059e80641ecbb035af5d94d86cad041ad2f36fd08"} Jan 31 15:17:43 crc kubenswrapper[4751]: I0131 15:17:43.122186 4751 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-marketplace/redhat-marketplace-5fvzz" event={"ID":"aa2d3a4e-15bd-4b0d-b187-a4db9049522f","Type":"ContainerDied","Data":"425504c025aad8a53ebdc18ebf06cc6728cfc8eaec5f1ce8efad85e1606a66c7"} Jan 31 15:17:43 crc kubenswrapper[4751]: I0131 15:17:43.122228 4751 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-5fvzz" Jan 31 15:17:43 crc kubenswrapper[4751]: I0131 15:17:43.122211 4751 scope.go:117] "RemoveContainer" containerID="f2e5db66425ffe36121edf1059e80641ecbb035af5d94d86cad041ad2f36fd08" Jan 31 15:17:43 crc kubenswrapper[4751]: I0131 15:17:43.140940 4751 scope.go:117] "RemoveContainer" containerID="f5cc5de70aba5a8a756cd1dc84d47958d0b1e225fc2bd378950d452ba598ee88" Jan 31 15:17:43 crc kubenswrapper[4751]: I0131 15:17:43.161246 4751 scope.go:117] "RemoveContainer" containerID="1bb577b2261a76c8b92f3b1e2f277f6b87a8febd1df30295d8a1636ce64ffba7" Jan 31 15:17:43 crc kubenswrapper[4751]: I0131 15:17:43.196905 4751 scope.go:117] "RemoveContainer" containerID="f2e5db66425ffe36121edf1059e80641ecbb035af5d94d86cad041ad2f36fd08" Jan 31 15:17:43 crc kubenswrapper[4751]: E0131 15:17:43.198448 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f2e5db66425ffe36121edf1059e80641ecbb035af5d94d86cad041ad2f36fd08\": container with ID starting with f2e5db66425ffe36121edf1059e80641ecbb035af5d94d86cad041ad2f36fd08 not found: ID does not exist" containerID="f2e5db66425ffe36121edf1059e80641ecbb035af5d94d86cad041ad2f36fd08" Jan 31 15:17:43 crc kubenswrapper[4751]: I0131 15:17:43.198533 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f2e5db66425ffe36121edf1059e80641ecbb035af5d94d86cad041ad2f36fd08"} err="failed to get container status \"f2e5db66425ffe36121edf1059e80641ecbb035af5d94d86cad041ad2f36fd08\": rpc error: code = NotFound desc = could not find container 
\"f2e5db66425ffe36121edf1059e80641ecbb035af5d94d86cad041ad2f36fd08\": container with ID starting with f2e5db66425ffe36121edf1059e80641ecbb035af5d94d86cad041ad2f36fd08 not found: ID does not exist" Jan 31 15:17:43 crc kubenswrapper[4751]: I0131 15:17:43.198599 4751 scope.go:117] "RemoveContainer" containerID="f5cc5de70aba5a8a756cd1dc84d47958d0b1e225fc2bd378950d452ba598ee88" Jan 31 15:17:43 crc kubenswrapper[4751]: E0131 15:17:43.200160 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f5cc5de70aba5a8a756cd1dc84d47958d0b1e225fc2bd378950d452ba598ee88\": container with ID starting with f5cc5de70aba5a8a756cd1dc84d47958d0b1e225fc2bd378950d452ba598ee88 not found: ID does not exist" containerID="f5cc5de70aba5a8a756cd1dc84d47958d0b1e225fc2bd378950d452ba598ee88" Jan 31 15:17:43 crc kubenswrapper[4751]: I0131 15:17:43.200260 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f5cc5de70aba5a8a756cd1dc84d47958d0b1e225fc2bd378950d452ba598ee88"} err="failed to get container status \"f5cc5de70aba5a8a756cd1dc84d47958d0b1e225fc2bd378950d452ba598ee88\": rpc error: code = NotFound desc = could not find container \"f5cc5de70aba5a8a756cd1dc84d47958d0b1e225fc2bd378950d452ba598ee88\": container with ID starting with f5cc5de70aba5a8a756cd1dc84d47958d0b1e225fc2bd378950d452ba598ee88 not found: ID does not exist" Jan 31 15:17:43 crc kubenswrapper[4751]: I0131 15:17:43.200345 4751 scope.go:117] "RemoveContainer" containerID="1bb577b2261a76c8b92f3b1e2f277f6b87a8febd1df30295d8a1636ce64ffba7" Jan 31 15:17:43 crc kubenswrapper[4751]: I0131 15:17:43.204846 4751 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-5fvzz"] Jan 31 15:17:43 crc kubenswrapper[4751]: I0131 15:17:43.226185 4751 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-5fvzz"] Jan 31 15:17:43 crc kubenswrapper[4751]: 
E0131 15:17:43.206238 4751 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1bb577b2261a76c8b92f3b1e2f277f6b87a8febd1df30295d8a1636ce64ffba7\": container with ID starting with 1bb577b2261a76c8b92f3b1e2f277f6b87a8febd1df30295d8a1636ce64ffba7 not found: ID does not exist" containerID="1bb577b2261a76c8b92f3b1e2f277f6b87a8febd1df30295d8a1636ce64ffba7" Jan 31 15:17:43 crc kubenswrapper[4751]: I0131 15:17:43.226284 4751 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1bb577b2261a76c8b92f3b1e2f277f6b87a8febd1df30295d8a1636ce64ffba7"} err="failed to get container status \"1bb577b2261a76c8b92f3b1e2f277f6b87a8febd1df30295d8a1636ce64ffba7\": rpc error: code = NotFound desc = could not find container \"1bb577b2261a76c8b92f3b1e2f277f6b87a8febd1df30295d8a1636ce64ffba7\": container with ID starting with 1bb577b2261a76c8b92f3b1e2f277f6b87a8febd1df30295d8a1636ce64ffba7 not found: ID does not exist" Jan 31 15:17:44 crc kubenswrapper[4751]: I0131 15:17:44.413585 4751 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aa2d3a4e-15bd-4b0d-b187-a4db9049522f" path="/var/lib/kubelet/pods/aa2d3a4e-15bd-4b0d-b187-a4db9049522f/volumes" Jan 31 15:17:45 crc kubenswrapper[4751]: I0131 15:17:45.527329 4751 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-884dp" Jan 31 15:17:45 crc kubenswrapper[4751]: I0131 15:17:45.527746 4751 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-884dp" Jan 31 15:17:46 crc kubenswrapper[4751]: I0131 15:17:46.573994 4751 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-884dp" podUID="22e797bc-1dbd-481f-bb51-c4a04114ecda" containerName="registry-server" probeResult="failure" output=< Jan 31 15:17:46 crc kubenswrapper[4751]: timeout: failed to connect service 
":50051" within 1s Jan 31 15:17:46 crc kubenswrapper[4751]: >